From minoru.nishikubo at lyz.jp Wed Mar 1 00:56:38 2017
From: minoru.nishikubo at lyz.jp (Nishikubo Minoru)
Date: Wed, 1 Mar 2017 09:56:38 +0900
Subject: set_real_ip_from, real_ip_header directive in ngx_http_realip_module
In-Reply-To: <20170228134015.GL34777@mdounin.ru>
References: <20170228134015.GL34777@mdounin.ru>
Message-ID: 

Hello, Maxim

I understand your explanation, and thanks for the reply.

I tried to replace $binary_remote_addr (not $remote_addr, for performance
reasons) with the True-Client-IP header that the Akamai CDN server sends,
via ngx_http_limit_req_module, and to use it as the shared memory zone key.

On Tue, Feb 28, 2017 at 10:40 PM, Maxim Dounin wrote:

> Hello!
>
> On Tue, Feb 28, 2017 at 09:58:05AM +0900, Nishikubo Minoru wrote:
>
> > Hello,
> > I tried to limit an IPv4 address with the ngx_http_limit_req module and
> > ngx_http_realip_module, as Akamai sends True-Client-IP headers.
> >
> > According to the documentation of ngx_http_realip_module
> > (http://nginx.org/en/docs/http/ngx_http_realip_module.html),
> > we can write the set_real_ip_from and real_ip_header directives in the
> > http, server, and location contexts.
> >
> > But, in the above case (the ngx_http_limit_req module defines its key in
> > the http context), must the ngx_http_realip_module directives be defined
> > before the key (i.e. the IPv4 address replaced by ngx_http_realip_module)
> > used by the following limit_req_zone directive in the http context?
>
> Not really. There is no such requirement, that is, there is no need
> to place limit_req_zone and set_real_ip_from at the same level or
> even in a particular order.
>
> For example, the following configuration will work perfectly:
>
>     limit_req_zone $remote_addr zone=limit:1m rate=1r/m;
>     limit_req zone=limit;
>
>     server {
>         listen 80;
>
>         location / {
>             set_real_ip_from 127.0.0.1;
>             real_ip_header X-Real-IP;
>         }
>     }
>
> A problem may happen though if you configure the realip module in
> a location context, but use the address in different contexts.
>
> For example, the following will limit requests based on the
> connection's address, not the one set with realip:
>
>     limit_req_zone $remote_addr zone=limit:1m rate=1r/m;
>     limit_req zone=limit;
>
>     server {
>         listen 80;
>
>         location / {
>             try_files $uri @fallback;
>         }
>
>         location @fallback {
>             set_real_ip_from 127.0.0.1;
>             real_ip_header X-Real-IP;
>             proxy_pass ...
>         }
>     }
>
> In the above configuration, limit_req will work in the "location /"
> context, and the realip module in "location @fallback" won't be
> effective. To add to the confusion, the $remote_addr variable will be
> cached once used by limit_req, and attempts to use it even in
> location @fallback will return the original value, not the one changed
> by the realip module.
>
> Summing up the above, it is certainly possible to use the realip
> module with limit_req regardless of levels. They may interact
> unexpectedly in complex configurations though, and hence it is
> a good idea to avoid using set_real_ip_from / real_ip_header in
> location context unless you understand what you are doing.
>
> --
> Maxim Dounin
> http://nginx.org/
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From aldernetwork at gmail.com Wed Mar 1 01:18:59 2017
From: aldernetwork at gmail.com (Alder Netw)
Date: Tue, 28 Feb 2017 17:18:59 -0800
Subject: Nginx with a ICAP-like front-end
In-Reply-To: 
References: 
Message-ID: 

Rajeev, did you mean asking the validator or the client to set
X-Accel-Redirect? Could you elaborate here?

Thanks,

On Mon, Feb 27, 2017 at 9:14 AM, Rajeev J Sebastian <
rajeev.sebastian at gmail.com> wrote:

> Alder, maybe you should try X-Accel-Redirect to avoid this conversion of
> POST to GET?
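For reference, the validator-driven X-Accel-Redirect flow suggested above might be sketched roughly as follows. This is purely illustrative: the upstream names and the /protected prefix are hypothetical, and whether the original POST body survives the internal redirect would need testing.

```nginx
location / {
    # Every request goes to the validation server first. On success it
    # replies with an "X-Accel-Redirect: /protected/..." header instead
    # of a body, which nginx turns into an internal redirect.
    proxy_pass http://validator;
}

location /protected/ {
    # Reachable only via X-Accel-Redirect, never directly by clients.
    internal;
    proxy_pass http://realbackend/;
}
```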
>
> On Mon, Feb 27, 2017 at 10:41 PM, Rajeev J Sebastian <
> rajeev.sebastian at gmail.com> wrote:
>
>> From the docs it seems that this will work for all requests EXCEPT that
>> the @success fallback request will always be GET.
>>
>> On Mon, Feb 27, 2017 at 10:30 PM, Alder Netw
>> wrote:
>>
>>> Thanks Rajeev for the recipe. I was looking into using subrequests but
>>> found that subrequests only work for filter modules. This looks much
>>> simpler! The only concern is that error_page is only for GET/HEAD, not
>>> for POST?
>>>
>>> On Sun, Feb 26, 2017 at 11:48 PM, Rajeev J Sebastian <
>>> rajeev.sebastian at gmail.com> wrote:
>>>
>>>> Not sure if this is foolproof ... but maybe you can use the error_page
>>>> fallback by responding with a special status code.
>>>>
>>>> http://nginx.org/en/docs/http/ngx_http_core_module.html#error_page
>>>>
>>>>     location / {
>>>>         proxy_pass http://validator;
>>>>         error_page 510 = @success;
>>>>     }
>>>>
>>>>     location @success {
>>>>         proxy_pass http://realbackend;
>>>>     }
>>>>
>>>> On Mon, Feb 27, 2017 at 3:41 AM, Alder Netw
>>>> wrote:
>>>>
>>>>> Or is there any existing module that can be adapted to achieve this?
>>>>> I'd appreciate it if someone could shed some light. Thx,
>>>>> - Alder
>>>>>
>>>>> On Sat, Feb 25, 2017 at 9:24 PM, Alder Netw
>>>>> wrote:
>>>>>
>>>>>> Hi, I want to add an ICAP-like front-end validation server V with
>>>>>> nginx. The user scenario is like this:
>>>>>>
>>>>>> The client will usually access the real app server R via nginx, but
>>>>>> with a validation server V, the client request will first pass to V,
>>>>>> V will do certain validation, and upon success the request will be
>>>>>> forwarded to R and R will return directly to the client; upon
>>>>>> failure, the request will be denied.
>>>>>>
>>>>>> Is there any easy nginx config which can achieve this?
Thanks,
>>>>>>
>>>>>> - Alder
>>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> nginx mailing list
>>>>> nginx at nginx.org
>>>>> http://mailman.nginx.org/mailman/listinfo/nginx
>>>>>
>>>>
>>>> _______________________________________________
>>>> nginx mailing list
>>>> nginx at nginx.org
>>>> http://mailman.nginx.org/mailman/listinfo/nginx
>>>>
>>>
>>> _______________________________________________
>>> nginx mailing list
>>> nginx at nginx.org
>>> http://mailman.nginx.org/mailman/listinfo/nginx
>>>
>>
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tjlp at sina.com Wed Mar 1 07:29:32 2017
From: tjlp at sina.com (tjlp at sina.com)
Date: Wed, 01 Mar 2017 15:29:32 +0800
Subject: Issue about nginx removing the header "Connection" in HTTP response?
Message-ID: <20170301072932.CFCB94C09D6@webmail.sinamail.sina.com.cn>

Hi, nginx guys,

In our system, for some special requests, the upstream server will return a
response whose header includes "Connection: Close". According to the HTTP
protocol, "Connection" is a hop-by-hop header, so nginx will remove this
header and the client can't execute its business logic correctly.

How do we handle this scenario?

Thanks
Liu Peng

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at forum.nginx.org Wed Mar 1 08:57:22 2017
From: nginx-forum at forum.nginx.org (zaidahmd)
Date: Wed, 01 Mar 2017 03:57:22 -0500
Subject: NGINX - Reverse Proxy With Authentication at 2 Layers
Message-ID: 

** Problem Background **
I have an application, say app-A, which runs on a private network that is
unreachable from the public network. A new requirement is to deliver the
web pages of app-A to external users over the public network.
As a solution to expose app-A, I want to use NGINX as a reverse proxy and
will use two layers of authentication, as explained below. Kindly advise
whether I am moving in the right direction in implementing secure entry
using NGINX.

Reference images are attached at the end of this email.

** Authentication Level 1 ** NGINX Auth Service
As a solution to expose app-A, I want to use NGINX as a reverse proxy and
API gateway for external users to access the application on the internal
network. Once NGINX authenticates a request, it will forward it to app-A.

** Authentication Level 2 ** app-A Performs Authentication
After receiving a request from NGINX, app-A will perform its own
authentication, ignoring that the request came pre-authenticated from
NGINX. app-A performs its own authentication because it is to be kept
unaware of the new NGINX reverse proxy and must continue to work as is.

** Problem Situation **
The NGINX authentication service authenticates a request and sets a
session-id in the response so that it can identify the next request coming
from the same client. app-A also authenticates the request and puts its own
session-id in the response. The problem is that one session-id will get
overridden by the other.

Questions/options under consideration:

1. (Image-Ref-1) Is there any way I can configure NGINX to keep both
session-ids separate in the request, so that the auth service and app-A can
each recognize their own session information for an authenticated client?
2. (Image-Ref-2) If both session-ids cannot be saved, can NGINX be
configured to store the session-id responses of both app-A and the auth
service in its memory, and send only the auth service's session-id back to
the client? When a request comes back with the auth service's session-id,
NGINX should look up the corresponding app-A session and forward app-A's
session-id to app-A. This way the request would get authenticated at both
layers.
3. Which of the above 2 solutions can be implemented?
4.
Is it a good approach to have 2 layers of authentication when NGINX's API
gateway is used? If not, what configuration is required in app-A so that it
does not perform authentication for requests coming from NGINX? The
application environment is Java Spring.

** Links to Images **
Image-Ref-1 : http://i64.tinypic.com/27zbthj.gif
Image-Ref-2 : http://i63.tinypic.com/35a2lbp.png

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272674,272674#msg-272674

From nginx-forum at forum.nginx.org Wed Mar 1 10:54:35 2017
From: nginx-forum at forum.nginx.org (user384829)
Date: Wed, 01 Mar 2017 05:54:35 -0500
Subject: IPv6 upstream problem
In-Reply-To: <44b29027-0666-bc2d-7442-5184caca747f@xtremenitro.org>
References: <44b29027-0666-bc2d-7442-5184caca747f@xtremenitro.org>
Message-ID: 

Did anyone find a solution for this? I also have many of these errors
logged because I am using Google Container Engine, which does not support
IPv6.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272660,272675#msg-272675

From nginx-forum at forum.nginx.org Wed Mar 1 12:32:03 2017
From: nginx-forum at forum.nginx.org (user384829)
Date: Wed, 01 Mar 2017 07:32:03 -0500
Subject: Large latency increase when using a resolver for proxy_pass
Message-ID: 

Hi,

I want to resolve upstream hostnames used for proxy_pass in line with the
TTL of the DNS record, but when I add this configuration the response from
nginx is MUCH slower.

I've run tcpdump, and the upstream DNS record is resolved and re-resolved
as the TTL dictates. But requests are slower even when the TTL has not
expired and nginx is not attempting to resolve the record.

The difference is quite large: 0.3s vs 1s.
Measured like this: curl -w "%{response_code} %{time_total}\n" -s -o /dev/null 'http://my_nginx_host/api/something_something == Slow Configuration === resolver 8.8.8.8 8.8.4.4 ipv6=off; server { listen 80; set $upstream_host https://my.upstream-host.com; location ~ /api/ { rewrite /api/(.*) /$1 break; proxy_pass $upstream_host; } } And tcpdump: 12:56:16.697158 IP 10.180.32.6.19343 > 10.76.8.92.80: Flags [P.], seq 2024041718:2024042038, ack 2611558079, win 4136, options [nop,nop,TS val 829926189 ecr 2513956729], length 320: HTTP: GET /api/something_something HTTP/1.1 12:56:16.697175 IP 10.76.8.92.80 > 10.180.32.6.19343: Flags [.], ack 320, win 229, options [nop,nop,TS val 2513956752 ecr 829926189], length 0 12:56:16.697287 IP 10.76.8.92.48696 > 23.210.228.34.443: Flags [S], seq 239123061, win 28400, options [mss 1420,sackOK,TS val 2513956752 ecr 0,nop,wscale 7], length 0 12:56:16.841393 IP 23.210.228.34.443 > 10.76.8.92.48696: Flags [S.], seq 2824997386, ack 239123062, win 28960, options [mss 1460,sackOK,TS val 1717524764 ecr 2513956752,nop,wscale 5], length 0 12:56:16.841423 IP 10.76.8.92.48696 > 23.210.228.34.443: Flags [.], ack 1, win 222, options [nop,nop,TS val 2513956896 ecr 1717524764], length 0 12:56:16.841585 IP 10.76.8.92.48696 > 23.210.228.34.443: Flags [P.], seq 1:290, ack 1, win 222, options [nop,nop,TS val 2513956896 ecr 1717524764], length 289 12:56:16.985287 IP 23.210.228.34.443 > 10.76.8.92.48696: Flags [.], ack 290, win 939, options [nop,nop,TS val 1717524908 ecr 2513956896], length 0 12:56:16.985428 IP 23.210.228.34.443 > 10.76.8.92.48696: Flags [P.], seq 1:4097, ack 290, win 939, options [nop,nop,TS val 1717524908 ecr 2513956896], length 4096 12:56:16.985442 IP 10.76.8.92.48696 > 23.210.228.34.443: Flags [.], ack 4097, win 286, options [nop,nop,TS val 2513957040 ecr 1717524908], length 0 12:56:16.986672 IP 23.210.228.34.443 > 10.76.8.92.48696: Flags [P.], seq 4097:4736, ack 290, win 939, options [nop,nop,TS val 1717524909 ecr 2513956896], length 
639 12:56:16.986679 IP 10.76.8.92.48696 > 23.210.228.34.443: Flags [.], ack 4736, win 308, options [nop,nop,TS val 2513957041 ecr 1717524909], length 0 12:56:16.987530 IP 10.76.8.92.48696 > 23.210.228.34.443: Flags [P.], seq 290:416, ack 4736, win 308, options [nop,nop,TS val 2513957042 ecr 1717524909], length 126 12:56:17.131482 IP 23.210.228.34.443 > 10.76.8.92.48696: Flags [P.], seq 4736:4962, ack 416, win 939, options [nop,nop,TS val 1717525054 ecr 2513957042], length 226 12:56:17.131714 IP 10.76.8.92.48696 > 23.210.228.34.443: Flags [P.], seq 416:779, ack 4962, win 330, options [nop,nop,TS val 2513957186 ecr 1717525054], length 363 12:56:17.315103 IP 23.210.228.34.443 > 10.76.8.92.48696: Flags [.], ack 779, win 972, options [nop,nop,TS val 1717525238 ecr 2513957186], length 0 12:56:17.649290 IP 23.210.228.34.443 > 10.76.8.92.48696: Flags [P.], seq 4962:5766, ack 779, win 972, options [nop,nop,TS val 1717525572 ecr 2513957186], length 804 12:56:17.649310 IP 23.210.228.34.443 > 10.76.8.92.48696: Flags [P.], seq 5766:5797, ack 779, win 972, options [nop,nop,TS val 1717525572 ecr 2513957186], length 31 12:56:17.649366 IP 23.210.228.34.443 > 10.76.8.92.48696: Flags [F.], seq 5797, ack 779, win 972, options [nop,nop,TS val 1717525572 ecr 2513957186], length 0 12:56:17.649377 IP 10.76.8.92.48696 > 23.210.228.34.443: Flags [.], ack 5797, win 352, options [nop,nop,TS val 2513957704 ecr 1717525572], length 0 12:56:17.649566 IP 10.76.8.92.48696 > 23.210.228.34.443: Flags [F.], seq 779, ack 5798, win 352, options [nop,nop,TS val 2513957704 ecr 1717525572], length 0 12:56:17.649599 IP 10.76.8.92.80 > 10.180.32.6.19343: Flags [P.], seq 1:789, ack 320, win 229, options [nop,nop,TS val 2513957704 ecr 829926189], length 788: HTTP: HTTP/1.1 200 OK == Fast Configuration === server { listen 80; location ~ /api/ { rewrite /api/(.*) /$1 break; proxy_pass https://my.upstream-host.com; } } And tcpdump: 12:49:13.058495 IP 10.180.32.5.15708 > 10.76.5.82.80: Flags [P.], seq 
4214721185:4214721506, ack 57908483, win 4136, options [nop,nop,TS val 829505219 ecr 2514027940], length 321: HTTP: GET /api/something_something HTTP/1.1 12:49:13.058510 IP 10.76.5.82.80 > 10.180.32.5.15708: Flags [.], ack 321, win 229, options [nop,nop,TS val 2514027965 ecr 829505219], length 0 12:49:13.058696 IP 10.76.5.82.32816 > 23.66.26.5.443: Flags [S], seq 722986122, win 28400, options [mss 1420,sackOK,TS val 2514027965 ecr 0,nop,wscale 7], length 0 12:49:13.064657 IP 23.66.26.5.443 > 10.76.5.82.32816: Flags [S.], seq 3299173152, ack 722986123, win 28960, options [mss 1460,sackOK,TS val 1717551633 ecr 2514027965,nop,wscale 5], length 0 12:49:13.064677 IP 10.76.5.82.32816 > 23.66.26.5.443: Flags [.], ack 1, win 222, options [nop,nop,TS val 2514027971 ecr 1717551633], length 0 12:49:13.064808 IP 10.76.5.82.32816 > 23.66.26.5.443: Flags [P.], seq 1:482, ack 1, win 222, options [nop,nop,TS val 2514027971 ecr 1717551633], length 481 12:49:13.070245 IP 23.66.26.5.443 > 10.76.5.82.32816: Flags [.], ack 482, win 939, options [nop,nop,TS val 1717551639 ecr 2514027971], length 0 12:49:13.070423 IP 23.66.26.5.443 > 10.76.5.82.32816: Flags [P.], seq 1:138, ack 482, win 939, options [nop,nop,TS val 1717551639 ecr 2514027971], length 137 12:49:13.070432 IP 10.76.5.82.32816 > 23.66.26.5.443: Flags [.], ack 138, win 231, options [nop,nop,TS val 2514027977 ecr 1717551639], length 0 12:49:13.070619 IP 10.76.5.82.32816 > 23.66.26.5.443: Flags [P.], seq 482:533, ack 138, win 231, options [nop,nop,TS val 2514027977 ecr 1717551639], length 51 12:49:13.070663 IP 10.76.5.82.32816 > 23.66.26.5.443: Flags [P.], seq 533:896, ack 138, win 231, options [nop,nop,TS val 2514027977 ecr 1717551639], length 363 12:49:13.076080 IP 23.66.26.5.443 > 10.76.5.82.32816: Flags [.], ack 896, win 972, options [nop,nop,TS val 1717551644 ecr 2514027977], length 0 12:49:13.287759 IP 23.66.26.5.443 > 10.76.5.82.32816: Flags [P.], seq 138:942, ack 896, win 972, options [nop,nop,TS val 1717551856 ecr 
2514027977], length 804 12:49:13.287832 IP 23.66.26.5.443 > 10.76.5.82.32816: Flags [P.], seq 942:973, ack 896, win 972, options [nop,nop,TS val 1717551856 ecr 2514027977], length 31 12:49:13.287843 IP 10.76.5.82.32816 > 23.66.26.5.443: Flags [.], ack 973, win 243, options [nop,nop,TS val 2514028194 ecr 1717551856], length 0 12:49:13.287845 IP 23.66.26.5.443 > 10.76.5.82.32816: Flags [F.], seq 973, ack 896, win 972, options [nop,nop,TS val 1717551856 ecr 2514027977], length 0 12:49:13.287972 IP 10.76.5.82.32816 > 23.66.26.5.443: Flags [F.], seq 896, ack 974, win 243, options [nop,nop,TS val 2514028194 ecr 1717551856], length 0 12:49:13.288004 IP 10.76.5.82.80 > 10.180.32.5.15708: Flags [P.], seq 1:789, ack 321, win 229, options [nop,nop,TS val 2514028194 ecr 829505219], length 788: HTTP: HTTP/1.1 200 OK I know the upstreams in the two tcpdump examples here have different IP addresses but the upstream is Akamai with a TTL on 19s so they are constantly changing. But the the response time is absolutely consistent at 0.3s vs 1s. What am I missing here? Is it because the upstream host is using HTTPS? I am using version 1.11.10. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272679,272679#msg-272679 From mdounin at mdounin.ru Wed Mar 1 12:44:19 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 1 Mar 2017 15:44:19 +0300 Subject: Large latency increase when using a resolver for proxy_pass In-Reply-To: References: Message-ID: <20170301124419.GQ34777@mdounin.ru> Hello! On Wed, Mar 01, 2017 at 07:32:03AM -0500, user384829 wrote: > Hi, > I am wanting to resolve upstream hostnames used for proxy_pass inline with > the TTL of the DNS record but when I add this configuration the response > from nginx is MUCH slower. > I've run tcpdump and the upstream DNS record is be resolved and reresolved > as TTL would dictate. > But requests are slower even when the TTL has not expired and nginx is not > attempting to resolve the record. 
>
> The difference is quite large: 0.3s vs 1s.
>
> Measured like this:
> curl -w "%{response_code} %{time_total}\n" -s -o /dev/null
> 'http://my_nginx_host/api/something_something'
>
> == Slow Configuration ==
>
>     resolver 8.8.8.8 8.8.4.4 ipv6=off;
>     server {
>         listen 80;
>         set $upstream_host https://my.upstream-host.com;
>         location ~ /api/ {
>             rewrite /api/(.*) /$1 break;
>             proxy_pass $upstream_host;
>         }
>     }

[...]

> What am I missing here? Is it because the upstream host is using HTTPS?
> I am using version 1.11.10.

Yes. You are using HTTPS and you are using dynamic address resolution, so
SSL session caching doesn't work. See the detailed explanation in the
response sent to this mailing list yesterday,
http://mailman.nginx.org/pipermail/nginx/2017-February/053042.html.

--
Maxim Dounin
http://nginx.org/

From luky-37 at hotmail.com Wed Mar 1 14:04:27 2017
From: luky-37 at hotmail.com (Lukas Tribus)
Date: Wed, 1 Mar 2017 14:04:27 +0000
Subject: AW: IPv6 upstream problem
In-Reply-To: 
References: <44b29027-0666-bc2d-7442-5184caca747f@xtremenitro.org>,
Message-ID: 

> Did anyone have a solution for this? I also have many of these errors logged
> because I am using Google Container Engine that does not support IPv6.

Try "man gai.conf" to configure getaddrinfo behavior [1].

You could also try forcing an ipv6=off nginx resolver by using a variable:

    set $blablaserver "dual-stack-ipv4-ipv6.xtremenitro.org";
    server $blablaserver;

Not sure if that works with upstream servers; it does work with proxy_pass.

cheers, lukas

[1] http://man7.org/linux/man-pages/man5/gai.conf.5.html

From nginx-forum at forum.nginx.org Wed Mar 1 14:05:32 2017
From: nginx-forum at forum.nginx.org (user384829)
Date: Wed, 01 Mar 2017 09:05:32 -0500
Subject: Large latency increase when using a resolver for proxy_pass
In-Reply-To: <20170301124419.GQ34777@mdounin.ru>
References: <20170301124419.GQ34777@mdounin.ru>
Message-ID: 

Hi Maxim,

OK, thanks for the reply, but how do I resolve this?
Am I supposed to use upstream blocks with hostnames in them? My impression
is that those won't be re-resolved in line with the TTL.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272679,272684#msg-272684

From mdounin at mdounin.ru Wed Mar 1 15:07:43 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 1 Mar 2017 18:07:43 +0300
Subject: Large latency increase when using a resolver for proxy_pass
In-Reply-To: 
References: <20170301124419.GQ34777@mdounin.ru>
Message-ID: <20170301150743.GS34777@mdounin.ru>

Hello!

On Wed, Mar 01, 2017 at 09:05:32AM -0500, user384829 wrote:

> OK thanks for the reply but how do I resolve this? Am I supposed to use
> upstream blocks with hostnames in them? My impression is that these won't be
> resolved inline with the TTL.

If you control your backends, consider using the normal nginx procedure of
a configuration reload to re-resolve names instead. Alternatively, consider
using plain HTTP to communicate with the backends, or tuning SSL for faster
handshakes (using ECDSA certs on the backends will help a lot).

A perfect solution for your specific case would probably be an upstream
block with "server ... resolve", but this functionality is only available
in nginx-plus
(http://nginx.org/en/docs/http/ngx_http_upstream_module.html#resolve).

--
Maxim Dounin
http://nginx.org/

From anonymouscross at gmail.com Wed Mar 1 16:20:33 2017
From: anonymouscross at gmail.com (Anonymous cross)
Date: Wed, 1 Mar 2017 10:20:33 -0600
Subject: Nginx support - transparent Forward proxy for internet traffic and reverse proxy for local Web server traffic
Message-ID: 

Hi All,

We are planning to integrate Nginx in our router.
Basically, we wanted to confirm whether Nginx supports the functionalities
below:

1) As a reverse proxy + transparent forward proxy - to route the traffic
for specific domains to a local web server, and all other traffic to the
public internet.
2) WebSocket support over reverse proxy - to support WebSocket
communication between a client and the local web server in Nginx reverse
proxy mode.

--
Regards,
John

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From santosh.kumar.mahapatra at gmail.com Thu Mar 2 01:30:49 2017
From: santosh.kumar.mahapatra at gmail.com (Santosh Mahapatra)
Date: Wed, 1 Mar 2017 17:30:49 -0800
Subject: Support for TLS extended master secret extension
Message-ID: 

Hello,

Does nginx support the TLS extended master secret extension (RFC 7627:
https://tools.ietf.org/html/rfc7627)? If so, what directive is available to
enable or disable it?

Thanks,
Santosh

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru Thu Mar 2 01:49:49 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 2 Mar 2017 04:49:49 +0300
Subject: Support for TLS extended master secret extension
In-Reply-To: 
References: 
Message-ID: <20170302014949.GA34777@mdounin.ru>

Hello!

On Wed, Mar 01, 2017 at 05:30:49PM -0800, Santosh Mahapatra wrote:

> Does nginx support TLS extended master secret extension(RFC 7627:
> https://tools.ietf.org/html/rfc7627 )?
>
> If so what directive is available to enable or disable it?

OpenSSL supports the Extended Master Secret extension starting with OpenSSL
1.1.0, and there is no way to disable it. So in nginx you'll see the
Extended Master Secret extension supported once you are running nginx with
OpenSSL 1.1.0 or later.
--
Maxim Dounin
http://nginx.org/

From nginx-forum at forum.nginx.org Thu Mar 2 10:17:22 2017
From: nginx-forum at forum.nginx.org (p0lak)
Date: Thu, 02 Mar 2017 05:17:22 -0500
Subject: One NGINX server to 2 backend servers
In-Reply-To: 
References: 
Message-ID: <93bf2e043569cbbc5ba30128cadacbbe.NginxMailingListEnglish@forum.nginx.org>

Yes, you're right! I forgot the proxy_pass statement, but I don't know why
I cannot set proxy_pass with a full path name such as
http://mysite/forum/index.html. The proxy_pass statement redirects me to
the root of my virtual host, i.e. http://mysite/.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272606,272708#msg-272708

From nginx-forum at forum.nginx.org Thu Mar 2 14:40:28 2017
From: nginx-forum at forum.nginx.org (polder_trash)
Date: Thu, 02 Mar 2017 09:40:28 -0500
Subject: Balancing NGINX reverse proxy
Message-ID: <3a79824d76eaffe5907a5d8d7a0d450c.NginxMailingListEnglish@forum.nginx.org>

Hi,

I have been reading the documentation and also searching this forum for a
while, but could not find an answer to my question.
Currently, I have 2 NGINX nodes acting as a reverse proxy (in a failover
setup using keepalived). The revproxy injects an authentication header for
an online website (transport is HTTPS).

As the number of users grows, the load on the current machine starts to get
uncomfortably high, and I would like to be able to spread the load over
both nodes.

What would be the best way to set this up?

I already tried adding both IP addresses to the DNS, but this, rather
predictably, only sent a handful of users to the secondary node.
I now plan to set up an NGINX node in front of these revproxy nodes, acting
as a round-robin load balancer. Will this work? Given that traffic is over
HTTPS, terminating the requests will probably put all the load on the load
balancer and therefore not solve my issue.

Your advice and help is greatly appreciated.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272713,272713#msg-272713

From dewanggaba at xtremenitro.org Thu Mar 2 16:34:55 2017
From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam)
Date: Thu, 2 Mar 2017 23:34:55 +0700
Subject: AW: IPv6 upstream problem
In-Reply-To: 
References: <44b29027-0666-bc2d-7442-5184caca747f@xtremenitro.org>
Message-ID: <129c79b3-799e-e2b2-bae6-cde4e503b099@xtremenitro.org>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Hello!

Yes, I've forced my system to resolve IPv4 first. But, just curious: why
can't the IPv6 upstream serve the traffic? If I access the IP address using
a browser, it works normally. I am using CentOS 7.

On 03/01/2017 09:04 PM, Lukas Tribus wrote:
>> Did anyone have a solution for this? I also have many of these
>> errors logged because I am using Google Container Engine that
>> does not support IPv6.
>
> Try "man gai.conf" to configure getaddrinfo behavior [1].
>
> You could also try forcing an ipv6=off nginx resolver by using a
> variable:
>
> set $blablaserver "dual-stack-ipv4-ipv6.xtremenitro.org"; server
> $blablaserver;
>
> Not sure if that works with upstream servers, it does work with
> proxy_pass.
>
> cheers, lukas
>
> [1] http://man7.org/linux/man-pages/man5/gai.conf.5.html
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iQI4BAEBCAAiBQJYuEmoGxxkZXdhbmdnYWJhQHh0cmVtZW5pdHJvLm9yZwAKCRDl
f9IgoCjNcK0SEACqb3fBlvvN7K5rf+3xQ7Cr7CssL0XnKwAoF/UbtvLZITsHcxyy
/FO2xqKQ9QG5eTqtMCPMkG+VoyZmpOBbzErYyF4+8AKPFlADyuk3zC3W0Wfpy2TE
FXUOBL/fQvJa3F4HW1q7YTLxt2TUyPXxdjZkVaZTAaExpJRDZWuzq00l2ycmvLPL
RmvVy14p+1XFZSEzHcn0pC49OCrhPHv8Moo7e+HbLS3+f/LYAyKT7tzVtdn9UXYO
sBf4k46ZoVW2kYBz7NgQ6RMSMbaAySStSbTPUifmQFR3h87a6mHpDzJ7DK9VDh1v
W37ROP0SWBs2BpA3iPwe9z+dGp+a2jmx3FbZCjnD5A++eGapY6+mqSFFdDM6fZmf
y2e9mM5nFQi0aRw+M53a2UI6HcE0pxQLBG1cpPLGtK6PMcB1dOTVvrZHN2R+HIAp
zDsh90r08u/Ntxj8M6eapLrhGQz4xKP3MoJJhtUON3fxiP5j6Edyf5+Z+XOpIyf/
2ipoEJIt+0zc5fJcHVbmqrXkDE0GPeYlxpIqCb4LIRpxLZ4DsHLNZGlAtPCTudgS
XCUiYduGSgir+Ga02Knprbk01OS5Jczg/ZtWtcIO4CH0A8aYKMB3gnuQhAMYHI6P
uaxMFIiq04KL9g+k763U0pR0y+URXRU+7Y2ZAVeVaIDDk4Y+MCr85Y7WyQ==
=PHsr
-----END PGP SIGNATURE-----

From paulr at rcom-software.com Thu Mar 2 18:35:22 2017
From: paulr at rcom-software.com (Paul Romero)
Date: Thu, 2 Mar 2017 10:35:22 -0800
Subject: MySQL Access w/ Nginx
Message-ID: <58B865EA.4050509@rcom-software.com>

Dear Nginx Community:

Do you think NGinx is a viable and advisable solution for providing MySQL
server access to my application? The basic requirements and goals of the
application are described below. Although NGinx is classified as a web
server which can act as a reverse proxy or load balancer, my application
does not need exactly that kind of functionality in the short term. The
short-term need is to allow mobile platforms to access a single MySQL
server. Eventually there will be multiple MySQL servers, and load balancing
and failure fallback will become issues, and perhaps caching. That means
the basic architecture is as follows.
| Mobile | <--> Internet <--> | NGinx   | <--> | MySQL   | <--> | MySQL  |
| System |      (TCP/IP)      |         |      | Backend |      | Server |

Initially NGinx, the MySQL backend, and the MySQL server will all be on the
same Linux host. My main concern is how the MySQL backend fits and operates
within that architecture. (I.e., I am not sure about the correct
terminology for the MySQL backend.) I assume, but am not sure, that it can
interact with NGinx without additional components, such as Drupal.

The basic requirement is the ability to perform remote MySQL queries and
operations with syntax and semantics which are virtually the same as the
corresponding manual operations. However, the remote system does not need
to use the same syntax and semantics as the module that performs the MySQL
operations. Also, smooth interaction with LAMP PHP and MySQL components is
a requirement. (I.e., I think Apache is not an issue.) Note that
application clients will put a large volume of data into the MySQL
database, and interaction with a web server is not an issue at this point.

The priority is to allow a mobile system, such as an Android and eventually
an Apple device, to access a MySQL server on a Unix/Linux system securely.
However, the priority for the same functionality on a conventional Internet
host is almost as high. The essential connection and authentication
requirements are as follows.

* SSL encryption/authentication
* MySQL authentication
* No passwords etc. are transmitted in the open.
* Support for multiple concurrent connections from the same or multiple
  systems.
* Each remote MySQL user must perform SSL authentication separately, and
  there is a 1-1 relationship between the SSL and MySQL authentication
  data.

Best Regards,

Paul R.
--
Paul Romero
-----------
RCOM Communications Software
EMAIL: paulr at rcom-software.com
PHONE: (510)482-2769

From nginx-forum at forum.nginx.org Thu Mar 2 20:40:55 2017
From: nginx-forum at forum.nginx.org (itpp2012)
Date: Thu, 02 Mar 2017 15:40:55 -0500
Subject: MySQL Access w/ Nginx
In-Reply-To: <58B865EA.4050509@rcom-software.com>
References: <58B865EA.4050509@rcom-software.com>
Message-ID: <87093893ed896614fb50dd8aa84ffb5e.NginxMailingListEnglish@forum.nginx.org>

With Lua you will find many examples of how to interact (non-blocking) with
MySQL; Lua also contains a load balancer which can be adapted to
load-balance SQL instances.

See also https://www.google.nl/#q=nginx+lua+mysql

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272718,272719#msg-272719

From alex at samad.com.au Thu Mar 2 22:06:04 2017
From: alex at samad.com.au (Alex Samad)
Date: Fri, 3 Mar 2017 09:06:04 +1100
Subject: Balancing NGINX reverse proxy
In-Reply-To: <3a79824d76eaffe5907a5d8d7a0d450c.NginxMailingListEnglish@forum.nginx.org>
References: <3a79824d76eaffe5907a5d8d7a0d450c.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Hi

If I am reading this right, you currently have too much load on 1 nginx
server and you wish to relieve this by adding another nginx server in front
of it?

What I have is 2 nodes, but I use pacemaker instead of keepalived - I like
it as a better solution, but that's for another thread. What you can do
with pacemaker is have 1 IP address distributed between multiple machines -
up to 16 nodes, I think, from memory. It uses the Linux iptables module for
doing this. It is dependent on the source IPs being distributed; if they
are clumped together, the hashing algorithm will not make it any better.

How many requests are you getting to overload nginx - I thought it was able
to handle very large amounts? Are your nodes big enough? Do you need more
CPUs or memory?
Alex On 3 March 2017 at 01:40, polder_trash wrote: > Hi, > > I have been reading the documentation and also searching this forum for a > while, but could not find an answer to my question. > Currently, I have a 2 NGINX nodes acting as a reverse proxy (in a failover > setup using keepalived). The revproxy injects an authentication header, for > an online website (transport is https). > > As the number of users grows, the load on the current machine starts to get > uncomfortably high and I would like to be able to spread the load over both > nodes. > > What would be the best way to set this up? > > I already tried adding both IP addresses to the DNS. But this, rather > predictably, only sent a handful of users to the secondary node. > I now plan about setting up an NGINX node in front of these revproxy nodes, > acting as a round-robin load balancer. Will this work? Given the fact that > traffic is over HTTPS, terminating the request will probably put all the > load on the load balancer and therefore does not solve my issue. > > Your advice and help is greatly appreciated. > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,272713,272713#msg-272713 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From al-nginx at none.at Thu Mar 2 22:25:11 2017 From: al-nginx at none.at (Aleksandar Lazic) Date: Thu, 02 Mar 2017 23:25:11 +0100 Subject: Issue about nginx removing the header "Connection" in HTTP response? In-Reply-To: <20170301072932.CFCB94C09D6@webmail.sinamail.sina.com.cn> References: <20170301072932.CFCB94C09D6@webmail.sinamail.sina.com.cn> Message-ID: Hi. Am 01-03-2017 08:29, schrieb tjlp at sina.com: > Hi, nginx guy, > > In our system, for some special requests, the upstream server will > return a response which the header includes "Connection: Close". 
> According to HTTP protocol, "Connection" is one-hop header. > So, nginx will remove this header and the client can't do the business > logic correctly. > > How to handle this scenario? you mean something like this? http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_set_header If the value of a header field is an empty string then this field will not be passed to a proxied server: proxy_set_header Connection ""; > Thanks > Liu Peng > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From al-nginx at none.at Thu Mar 2 22:37:24 2017 From: al-nginx at none.at (Aleksandar Lazic) Date: Thu, 02 Mar 2017 23:37:24 +0100 Subject: NGINX - Reverse Proxy With Authentication at 2 Layers In-Reply-To: References: Message-ID: Hi. Am 01-03-2017 09:57, schrieb zaidahmd: > ** Problem Background ** > I have an application, say app-A, which is running on a private network > unreachable by public network. Now a new requirement needs to deliver > the > webpages of app-A to external users over public network. > > As a solution to expose app-A, I want to use NGINX as reverse proxy and > will > use two layers of authentication as explained below. Kindly advise if i > am > moving in the right direction in implementing the secure entry using > NGINX. > > Reference Images attached at the end of email. > > ** Authentication Level 1 ** NGINX Auth Service As a solution to > expose > app-A, I want to use NGINX as reverse proxy and API gateway for > External > users to access the application in internal network. Once NGINX > authenticates the request it will forward to app-A. For this you can use http://nginx.org/en/docs/http/ngx_http_auth_request_module.html > ** Authentication Level 2 ** App-A performs Authentication After > receiving request from nginx, app-A will perform its own > authentication, > ignoring that the request came pre-authenticated from NGINX. 
app-A will > perform the authentication as app-A is to be kept unaware of the new > NGINX > reverse proxy and app-A will continue to work as is. For this you will use http://nginx.org/en/docs/http/ngx_http_upstream_module.html > ** Problem Situation ** > NGINX Authentication service authenticates the request and sets a > session-id > in response so that it can identify the next request coming from the > same > client. As app-A also authenticates the request and puts the session-id > in > response. The problem here is that one session-id will get overriden by > the > other. > > Questions/Options in consideration : > > 1. (Image-ref-1) Is there anyway that I can configure NGINX to keep > both > the session-ids seperate in the request so that Auth service and app-A > can > recognise there own session informations for authenticated client. You can set the session id to another variable with: http://nginx.org/en/docs/http/ngx_http_auth_request_module.html#auth_request_set > 2. (image-Ref-2) If both the session info cannot be saved, then can > we > configure NGINX to store session-id response of app-A and auth service > both > in its memory and only send the session-id of auth service back to > client. > And when the request comes back with Auth Service's session-id, NGINX > should > correlate the session of App-A and forward App-A's session to app-A. > This > way the request would get authenticated at both layers. I assume you can save the session-id in memcached with: http://nginx.org/en/docs/http/ngx_http_memcached_module.html > 3. Which solution can be performed from the above 2 ? I think both. I would prefer the second one because this could save some requests on the auth service. > 4. Is it good approach to have 2 layers of authentication when > NGINX's > API gateway is used? If not then what configuration is required in > app-A to > not perform authentication for the requests coming from NGINX? > Application > environment java spring.? 
Due to the fact that you haven't told us which auth method the auth service can offer, I suggest using OpenID Connect to perform a kind of SSO. There is http://nginx.org/en/docs/http/ngx_http_auth_jwt_module.html which is part of NGINX Plus. If you don't want to buy NGINX Plus you can use the modules which I have mentioned above. The best way would be to adapt app-A to be able to handle both situations: an available session-id, in your case the one from nginx, and no session-id. > ** Links to Images ** > Image-Ref-1 : http://i64.tinypic.com/27zbthj.gif > Image-Ref-2 : http://i63.tinypic.com/35a2lbp.png > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,272674,272674#msg-272674 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From zxcvbn4038 at gmail.com Fri Mar 3 00:38:39 2017 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Thu, 2 Mar 2017 19:38:39 -0500 Subject: Hiding PHP's WSOD with Nginx Message-ID: My employer uses Nginx in front of PHP-FPM to generate their web content. They have PHP's error reporting shut off in production, so when something does go wrong in their PHP scripts they end up with a "White Screen Of Death". From a protocol level the white screen of death is a 200 response with no content. They were wondering if there was a way to detect the WSOD within Nginx and substitute their 500 error page. PHP-FPM typically uses chunked encoding, so I don't think the content-length header is going to help me. Does anyone have any suggestions for how I might best accomplish this? I looked around with Google but all of the hits were people wanting to turn on error reporting, or looking for help getting php-fpm working with Nginx to begin with. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Fri Mar 3 03:54:55 2017 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Thu, 02 Mar 2017 22:54:55 -0500 Subject: Hiding PHP's WSOD with Nginx In-Reply-To: References: Message-ID: <1dcc1d01d08096a33304ffcf0efd6f0e.NginxMailingListEnglish@forum.nginx.org> You should view http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_catch_stderr Might be what you seek for a empty blank page output or specific text that would be a Fatal error etc. CJ Ess Wrote: ------------------------------------------------------- > My employer uses Nginx in front of PHP-FPM to generate their web > content. > They have PHP's error reporting shut off in production so when > something > does go wrong in their PHP scripts they end up with a "White Screen Of > Death". From a protocol level the white screen of death is a 200 > response > with no content. They were wondering if there was a way to detect the > WSOD > within Nginx and substitute their 500 error page. > > PHP-FPM typically uses chunked encoding so I don't think the > content-length > header is going to help me. > > Does anyone have any suggestions how I might best accomplish this? > > I looked around with Google but all of the hits were people wanting to > turn > on error reporting, or looking help getting php-fpm working with Nginx > to > begin with. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272724,272726#msg-272726 From tjlp at sina.com Fri Mar 3 05:00:36 2017 From: tjlp at sina.com (tjlp at sina.com) Date: Fri, 03 Mar 2017 13:00:36 +0800 Subject: =?UTF-8?Q?=E5=9B=9E=E5=A4=8D=EF=BC=9ARe=3A_Issue_about_nginx_removing_the_?= =?UTF-8?Q?header_=22Connection=22_in_HTTP_response=3F?= Message-ID: <20170303050036.10E294C0955@webmail.sinamail.sina.com.cn> Hi, What I mention is the header in response from backend server. 
Your answer about proxy_set_header is the "Connection" header in request. Thanks Liu Peng ----- ???? ----- ????Aleksandar Lazic ????nginx at nginx.org ????tjlp at sina.com ???Re: Issue about nginx removing the header "Connection" in HTTP response? ???2017?03?03? 06?25? Hi. Am 01-03-2017 08:29, schrieb tjlp at sina.com: > Hi, nginx guy, > > In our system, for some special requests, the upstream server will > return a response which the header includes "Connection: Close". > According to HTTP protocol, "Connection" is one-hop header. > So, nginx will remove this header and the client can't do the business > logic correctly. > > How to handle this scenario? you mean something like this? http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_set_header If the value of a header field is an empty string then this field will not be passed to a proxied server: proxy_set_header Connection ""; > Thanks > Liu Peng > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From al-nginx at none.at Fri Mar 3 08:19:44 2017 From: al-nginx at none.at (Aleksandar Lazic) Date: Fri, 03 Mar 2017 09:19:44 +0100 Subject: =?UTF-8?Q?Re=3A_=E5=9B=9E=E5=A4=8D=EF=BC=9ARe=3A_Issue_about_nginx_removin?= =?UTF-8?Q?g_the_header_=22Connection=22_in_HTTP_response=3F?= In-Reply-To: <20170303050036.10E294C0955@webmail.sinamail.sina.com.cn> References: <20170303050036.10E294C0955@webmail.sinamail.sina.com.cn> Message-ID: Hi. then one directive upward. http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header Cheers aleks Am 03-03-2017 06:00, schrieb tjlp at sina.com: > Hi, > > What I mention is the header in response from backend server. Your answer about proxy_set_header is the "Connection" header in request. > > Thanks > Liu Peng > > ----- ???? 
----- > ????Aleksandar Lazic > ????nginx at nginx.org > ????tjlp at sina.com > ???Re: Issue about nginx removing the header "Connection" in HTTP response? > ???2017?03?03? 06?25? > > Hi. > Am 01-03-2017 08:29, schrieb tjlp at sina.com: >> Hi, nginx guy, >> >> In our system, for some special requests, the upstream server will >> return a response which the header includes "Connection: Close". >> According to HTTP protocol, "Connection" is one-hop header. >> So, nginx will remove this header and the client can't do the business >> logic correctly. >> >> How to handle this scenario? > you mean something like this? > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_set_header > If the value of a header field is an empty string then this field will > not be passed to a proxied server: > proxy_set_header Connection ""; >> Thanks >> Liu Peng >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Mar 3 08:33:18 2017 From: nginx-forum at forum.nginx.org (polder_trash) Date: Fri, 03 Mar 2017 03:33:18 -0500 Subject: Balancing NGINX reverse proxy In-Reply-To: References: Message-ID: <25490575d14ffef383dbad6adc909775.NginxMailingListEnglish@forum.nginx.org> Alexsamad, I might not have been clear, allow me to try again: * currently 2 NGINX revproxy nodes, 1 active the other on standby in case node 1 fails. * Since I am injecting an authentication header into the request, the HTTPS request has to be offloaded at the node and introduces additional load compared to injecting into non-encrypted requests. * Current peak load ~60 concurrent requests, ~100% load on CPU. Concurrent requests expected to more than double, so revproxy will be bottleneck. 
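[Editorial sketch: a way to spread such traffic without terminating TLS at the balancer is TCP-level proxying with nginx's stream module - what load-balancer vendors call SSL passthrough. A minimal sketch, assuming nginx 1.9.0 or later built with --with-stream; the backend addresses are placeholders.]

```nginx
# Sketch only: the balancer forwards raw TLS bytes and never decrypts;
# each backend revproxy still terminates TLS and injects the auth header.
stream {
    upstream tls_backends {
        least_conn;              # send new connections to the least busy node
        server 192.0.2.11:443;   # placeholder: backend revproxy 1
        server 192.0.2.12:443;   # placeholder: backend revproxy 2
    }

    server {
        listen 443;
        proxy_pass tls_backends;
    }
}
```

One trade-off to be aware of: with passthrough the backends see the balancer's address as the client. The PROXY protocol (proxy_protocol in the stream server, with matching listen options on the backends) is one way to recover the real client IP.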
The NGINX revproxies run as a VM and I can ramp up the machine specs a little bit, but I do not expect this to completely solve the issue here. Therefore I am looking for some method of spreading the requests over multiple backend revproxies, without the load balancer frontend having to deal with SSL offloading. From the KEMP LoadMaster documentation I found that this technique is called SSL Passthrough. I am currently looking into whether that is also supported by NGINX. What do you think? Will this solve my issue? Am I on the wrong track? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272713,272729#msg-272729 From luky-37 at hotmail.com Fri Mar 3 09:02:48 2017 From: luky-37 at hotmail.com (Lukas Tribus) Date: Fri, 3 Mar 2017 09:02:48 +0000 Subject: AW: AW: IPv6 upstream problem In-Reply-To: <129c79b3-799e-e2b2-bae6-cde4e503b099@xtremenitro.org> References: <44b29027-0666-bc2d-7442-5184caca747f@xtremenitro.org> , <129c79b3-799e-e2b2-bae6-cde4e503b099@xtremenitro.org> Message-ID: > But, just curios, why IPv6 upstream can't serve the traffic? Because if you configure IPv6 on your system but don't have IPv6 connectivity, it will try and fail. > If I access the IP Address using browser, it's normal. Because the browser probably recognizes the broken configuration and works around it. Use curl to check, not a browser. From nginx-forum at forum.nginx.org Fri Mar 3 09:15:02 2017 From: nginx-forum at forum.nginx.org (abhipower.abhi) Date: Fri, 03 Mar 2017 04:15:02 -0500 Subject: Nginx configuration Issue Message-ID: <151f67fd79e076520392a0de438c2992.NginxMailingListEnglish@forum.nginx.org> I am using nginx-1.10.3 as a load balancer. In my architecture, I have two servers- Hostname - sal15062hkb152, IP Address - 172.15.54.116 Hostname - sal15062hkb184, IP Address - 172.15.54.105 I want both to work in active-passive mode with nginx. 
My application is running on both servers and client can access my application using URL - https://sal15062hkb152/views or https://sal15062hkb184/views My nginx.conf is - #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 80; server_name sal15062hkb152; #http://sal15062hkb152/oneview/; #charset koi8-r; #access_log logs/host.access.log main; location / { root html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / { # root html; # index index.html index.htm; # } #} # HTTPS server # server { listen 443 ssl; server_name sal15062hkb152; #ssl_certificate cert.pem; #ssl_certificate_key cert.key; ssl_certificate iperspective.crt; ssl_certificate_key iperspective.key; 
ssl_session_cache shared:SSL:1m; ssl_session_timeout 5m; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; location / { root http://oneview; index index.html index.htm; } } } I am trying to access https://sal15062hkb152/views but I am getting "404 Not Found" in browser. I believe, my nginx.conf file configuration is not correct. Please let me know proper configuration for my nginx.conf file. Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272731,272731#msg-272731 From nginx-forum at forum.nginx.org Fri Mar 3 09:53:56 2017 From: nginx-forum at forum.nginx.org (user384829) Date: Fri, 03 Mar 2017 04:53:56 -0500 Subject: Large latency increase when using a resolver for proxy_pass In-Reply-To: <20170301150743.GS34777@mdounin.ru> References: <20170301150743.GS34777@mdounin.ru> Message-ID: <7a1fe9523d74469a4ccc9a66dd6c268f.NginxMailingListEnglish@forum.nginx.org> Hi Maxim, Thanks for the reply. What about if we used a stream/tcp proxy_pass with resolver? Something like this: resolver 8.8.8.8 8.8.4.4 ipv6=off; stream { server { listen localhost:81; set $upstream_host my.upstream-host.com; proxy_pass $upstream_host:443; } } server { listen 80; location ~ /api/ { rewrite /api/(.*) /$1 break; proxy_ssl_verify off; proxy_pass https://localhost:81; } } Would this work? Thanks, Max Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272679,272732#msg-272732 From dewanggaba at xtremenitro.org Fri Mar 3 09:59:03 2017 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Fri, 3 Mar 2017 16:59:03 +0700 Subject: Nginx configuration Issue In-Reply-To: <151f67fd79e076520392a0de438c2992.NginxMailingListEnglish@forum.nginx.org> References: <151f67fd79e076520392a0de438c2992.NginxMailingListEnglish@forum.nginx.org> Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Hello! On 03/03/2017 04:15 PM, abhipower.abhi wrote: > I am using nginx-1.10.3 as a load balancer. 
In my architecture, I > have two servers- > > Hostname - sal15062hkb152, IP Address - 172.15.54.116 Hostname - > sal15062hkb184, IP Address - 172.15.54.105 I want both should work > in active-passive mode with nginx. My application is running on > both servers and client can access my application using URL - > > https://sal15062hkb152/views or https://sal15062hkb184/views > > My nginx.conf is - > > #user nobody; worker_processes 1; > > #error_log logs/error.log; #error_log logs/error.log notice; > #error_log logs/error.log info; > > #pid logs/nginx.pid; > > > events { worker_connections 1024; } > > > http { include mime.types; default_type > application/octet-stream; > > #log_format main '$remote_addr - $remote_user [$time_local] > "$request" ' # '$status $body_bytes_sent > "$http_referer" ' # '"$http_user_agent" > "$http_x_forwarded_for"'; > > #access_log logs/access.log main; > > sendfile on; #tcp_nopush on; > > #keepalive_timeout 0; keepalive_timeout 65; > > #gzip on; > > server { listen 80; server_name sal15062hkb152; > > #http://sal15062hkb152/oneview/; > > #charset koi8-r; > > #access_log logs/host.access.log main; > > location / { root html; index index.html index.htm; } > > #error_page 404 /404.html; > > # redirect server error pages to the static page /50x.html # > error_page 500 502 503 504 /50x.html; location = /50x.html { > root html; } > > # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # > #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} > > # pass the PHP scripts to FastCGI server listening on > 127.0.0.1:9000 # #location ~ \.php$ { # root html; # > fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # > fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # > include fastcgi_params; #} > > # deny access to .htaccess files, if Apache's document root # > concurs with nginx's one # #location ~ /\.ht { # deny all; #} > } > > > # another virtual host using mix of IP-, name-, and port-based > configuration # #server { # listen 
8000; # listen > somename:8080; # server_name somename alias another.alias; > > # location / { # root html; # index index.html > index.htm; # } #} > > > # HTTPS server # server { > > listen 443 ssl; server_name sal15062hkb152; > > #ssl_certificate cert.pem; #ssl_certificate_key cert.key; > > ssl_certificate iperspective.crt; ssl_certificate_key > iperspective.key; > > ssl_session_cache shared:SSL:1m; ssl_session_timeout 5m; > > ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; > [..] > location / { root http://oneview; index index.html index.htm; } What do you expect for this directive? root[1] directive should be on local path, if you want proxying response to another backend. try using proxy_pass[2]. [1] http://nginx.org/en/docs/http/ngx_http_core_module.html#root [2] http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass > } > > } > > I am trying to access https://sal15062hkb152/views but I am getting > "404 Not Found" in browser. I believe, my nginx.conf file > configuration is not correct. Please let me know proper > configuration for my nginx.conf file. 
Thanks > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,272731,272731#msg-272731 > > _______________________________________________ nginx mailing list > nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx > -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQI4BAEBCAAiBQJYuT5kGxxkZXdhbmdnYWJhQHh0cmVtZW5pdHJvLm9yZwAKCRDl f9IgoCjNcFnxD/oCSqZG6BLx2+i7qRDlPFtjs65yCc25UYTigT0GEKgzi9Sa9TsJ wcxnbreOtaZxjG6lM7L890mkZ7xaRCd5hZea4TfaBUKfUESzbCUi9mFQUh4yWhei RksO+CmYmbGZ1t8EFI+bzw7jJyNYyvsdHpEAjnLBs7NtWogsKMSV9e0G58E/XIts mfQUc5L0O4XeUL/DSD/+ie3Mx54Wdjw4SHr5XnogNpowvKRYqL76aOvjT0zxBFNX u1lHnRHR6m2t2dZZGnZ3jNswiZx1JI3P1gDFKHHV9PAZgV3poc6jU4WqvJuZLIDr 2IghYg8c7xwmPnvncNKOr0XNpBpiwyR9KWR3A+cWzGHzNxQJXaLIt55VLJxIP9uF v99xY6S6ONkRPyVywSWUa6eOr9cc0f0Un8sdqIWLyUcihR0JrTBTtl2Ycu8bR5g9 g8+kQQW6ryP9X79LB9hICoIeXwij4gWC8g7EfVnYmElKkRwp2msuVgc1FGvfW/9o HT6oBRw0V7JmcCTCVZ7ycmDsl1PdUvWm5/6LvUwlCYzFTyPFl7ZcDuGxF4bknVQM 8eEbwb/HgwLIq1jYc2GStv2WjNdLds9KsPIjggmKxqZPpk3IKMKLhA9BjcCitqsP 93KkxLCHC0SyVREEoX7aG3DOvYyHUrDPNi7t2L6nW9hnwJfHWaLtzS8kAA== =Ac+6 -----END PGP SIGNATURE----- From mdounin at mdounin.ru Fri Mar 3 12:41:41 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 3 Mar 2017 15:41:41 +0300 Subject: Large latency increase when using a resolver for proxy_pass In-Reply-To: <7a1fe9523d74469a4ccc9a66dd6c268f.NginxMailingListEnglish@forum.nginx.org> References: <20170301150743.GS34777@mdounin.ru> <7a1fe9523d74469a4ccc9a66dd6c268f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170303124141.GG34777@mdounin.ru> Hello! On Fri, Mar 03, 2017 at 04:53:56AM -0500, user384829 wrote: > What about if we used a stream/tcp proxy_pass with resolver? 
Something like > this: > > resolver 8.8.8.8 8.8.4.4 ipv6=off; > > stream { > server { > listen localhost:81; > set $upstream_host my.upstream-host.com; > proxy_pass $upstream_host:443; > } > } > > server { > listen 80; > location ~ /api/ { > rewrite /api/(.*) /$1 break; > proxy_ssl_verify off; > proxy_pass https://localhost:81; > } > } > > Would this work? The result will be exactly the same: it will work, but SSL sessions won't be cached. -- Maxim Dounin http://nginx.org/ From al-nginx at none.at Fri Mar 3 15:35:53 2017 From: al-nginx at none.at (Aleksandar Lazic) Date: Fri, 03 Mar 2017 16:35:53 +0100 Subject: [no subject] In-Reply-To: References: Message-ID: <21698758d05bb06b3d89346a4c60b33a@none.at> Hi Anto. Am 26-02-2017 23:32, schrieb Anto: > Hi Aleksandar, > > Thank you , my requirement is i need LB to redirect to same OHS server > where i have multiple httpd server's running. So you need a stickyness, right?! There is a rather long post with a stickiness description https://www.nginx.com/resources/deployment-guides/load-balance-apache-tomcat/#session-persistence-basic. Btw.: what's a OHS Server? BR Aleks > Regards > > "" Anto Telvin Mathew "" > > On Sat, Feb 25, 2017 at 4:29 AM, Aleksandar Lazic > wrote: > Hi Anto. > > Am 24-02-2017 19:03, schrieb Anto: > > Hi Team , > > Would like to know how i can configure Ngnix LB with SSL termination ? > In addition to the above would like to configure LB with multiple > httpd's with single IP. > Can you guide me how i can do the same with proxy pass ? > > Note : I have a single OHS server with 2 different httpd.conf files > listening to two different ports. > Need to configure LB with SSL termination to redirect to same servers . > > Need a step by step guideline help - thanks How about to start with > this blog post. 
> > https://www.nginx.com/resources/admin-guide/nginx-https-upstreams/ > > Best regards > Aleks From nginx-forum at forum.nginx.org Fri Mar 3 15:47:26 2017 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Fri, 03 Mar 2017 10:47:26 -0500 Subject: Nginx Map how to check value if empty Message-ID: <6ac5c02bbc781fee8e58adb6a4b04b44.NginxMailingListEnglish@forum.nginx.org> So I have the following Map map $http_cf_connecting_ip $client_ip_from_cf { default $http_cf_connecting_ip; } How can I make it so that if the client did not send that $http_ header, the $client_ip_from_cf variable value = $binary_remote_addr? Not sure how to check in a map if that http header is present. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272744,272744#msg-272744 From tjlp at sina.com Sat Mar 4 08:12:42 2017 From: tjlp at sina.com (tjlp at sina.com) Date: Sat, 04 Mar 2017 16:12:42 +0800 Subject: =?UTF-8?B?5Zue5aSN77yaUmU6X+WbnuWkje+8mlJlOl9Jc3N1ZV9hYm91dF9uZ2lueF9yZW1v?= =?UTF-8?B?dmluZ190aGVfaGVhZGVyXyJDb25uZWN0aW9uIl9pbl9IVFRQX3Jlc3BvbnNl?= =?UTF-8?B?Pw==?= Message-ID: <20170304081242.16C9610200E3@webmail.sinamail.sina.com.cn> Hi, Aleks, I don't want to hide the header. My problem is that Nginx changes the "Connection: close" header in the response from the upstream server to "Connection: keep-alive" and sends it to the client. I want to keep the original "Connection: close" header. Thanks Liu Peng ----- ???? 
----- ????Aleksandar Lazic ????nginx at nginx.org ????tjlp at sina.com ???Re: Issue about nginx removing the header "Connection" in HTTP response? ???2017?03?03? 06?25? Hi. Am 01-03-2017 08:29, schrieb tjlp at sina.com: > Hi, nginx guy, > > In our system, for some special requests, the upstream server will > return a response which the header includes "Connection: Close". > According to HTTP protocol, "Connection" is one-hop header. > So, nginx will remove this header and the client can't do the business > logic correctly. > > How to handle this scenario? you mean something like this? http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_set_header If the value of a header field is an empty string then this field will not be passed to a proxied server: proxy_set_header Connection ""; > Thanks > Liu Peng > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sat Mar 4 08:14:33 2017 From: francis at daoine.org (Francis Daly) Date: Sat, 4 Mar 2017 08:14:33 +0000 Subject: Nginx Map how to check value if empty In-Reply-To: <6ac5c02bbc781fee8e58adb6a4b04b44.NginxMailingListEnglish@forum.nginx.org> References: <6ac5c02bbc781fee8e58adb6a4b04b44.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170304081433.GA15209@daoine.org> On Fri, Mar 03, 2017 at 10:47:26AM -0500, c0nw0nk wrote: Hi there, > map $http_cf_connecting_ip $client_ip_from_cf { > default $http_cf_connecting_ip; > } > > How can I make it so if the client did not send that $http_ header it makes > $client_ip_from_cf variable value = $binary_remote_addr > > Not sure how to check in a map if that http header is present. If the http header is absent, the matching variable is empty. So it will have the value "". Use that as the first half of your "map" pair. 
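[Editorial sketch: concretely, the empty-string match Francis describes might look like this; the limit_req_zone line just shows one way the resulting variable could be consumed, with a placeholder zone name.]

```nginx
map $http_cf_connecting_ip $client_ip_from_cf {
    ""      $binary_remote_addr;     # header absent or empty: use the connection's address
    default $http_cf_connecting_ip;  # otherwise use the CF-Connecting-IP value as sent
}

# placeholder zone name and rate, for illustration only
limit_req_zone $client_ip_from_cf zone=cf_zone:10m rate=10r/s;
```

One caveat: $binary_remote_addr is a compact binary value while the header is plain text, so the two kinds of key differ in format and size; using $remote_addr for the fallback instead keeps the key format uniform at a small memory cost.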
f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Mar 4 08:26:15 2017 From: francis at daoine.org (Francis Daly) Date: Sat, 4 Mar 2017 08:26:15 +0000 Subject: Websocket, set Sec-Websocket-Protocol In-Reply-To: References: Message-ID: <20170304082615.GB15209@daoine.org> On Fri, Feb 24, 2017 at 01:24:11AM -0500, ashoba wrote: Hi there, > How I can set Sec-WebSocket-Protocol in config? > I've tried proxy_set_header Sec-WebSocket-Protocol "v10.stomp, v11.stomp"; > and add_header Sec-WebSocket-Protocol "v10.stomp, v11.stomp". > In response, I'm not getting 'Sec-WebSocket-Protocol' header. If you want to add a header to the response that nginx sends to a client using stock nginx, "add_header" (http://nginx.org/r/add_header) is the directive to use. That directive needs to apply within the location{} that nginx uses to handle the request that you make. And it only applies to certain http response codes, per the documentation. What is the config that you use that gives a response that you do not want? (There is also a non-stock "headers more" module that can set headers. Depending on your requirements, that may be interesting to you.) f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Mar 4 08:43:59 2017 From: francis at daoine.org (Francis Daly) Date: Sat, 4 Mar 2017 08:43:59 +0000 Subject: One NGINX server to 2 backend servers In-Reply-To: References: Message-ID: <20170304084359.GC15209@daoine.org> On Fri, Feb 24, 2017 at 06:17:51AM -0500, p0lak wrote: Hi there, > I want to define this behavior > > NGINX Server (Public IP) >>> listening on port 80 >>> redirect to a LAN > Server on port 80 (http://mylocalserver/virtualhost1) > NGINX Server (Public IP) >>> listening on port 81 >>> redirect to a LAN > Server on port 80 (http://mylocalserver/virtualhost2) Some of the terms you use here do have specific meanings in a http context, and it may or may not be the case that you mean the same thing. 
For clarification: if you access your two internal web areas directly, do you access things like http://servername/directory1/ and http://servername/directory2/ or http://servername1/ and http://servername2/ ? (You do say the first; but the term "virtual host" would usually refer to the second. And the nginx config will differ depending on which it is.) If it is the first, then it is probably easier for you just to have nginx listening on public port 80, and accept requests for /directory1/ and /directory2/ and proxy_pass them as-is to the internal server. If instead you want one nginx listener on port 80 and another on port 81, then you will possibly have to make sure that the internal web areas are happy with being proxy_pass'ed like that -- they may need extra configuration to know the public url that refers to them if they create absolute urls within the html they return, for example. Cheers, f -- Francis Daly francis at daoine.org From al-nginx at none.at Sat Mar 4 09:21:51 2017 From: al-nginx at none.at (Aleksandar Lazic) Date: Sat, 04 Mar 2017 10:21:51 +0100 Subject: =?UTF-8?B?UmU6IOWbnuWkje+8mlJlOl/lm57lpI3vvJpSZTpfSXNzdWVfYWJvdXRfbmdpbnhf?= =?UTF-8?B?cmVtb3ZpbmdfdGhlX2hlYWRlcl8iQ29ubmVjdGlvbiJfaW5fSFRUUF9yZXNw?= =?UTF-8?B?b25zZT8=?= In-Reply-To: <20170304081242.16C9610200E3@webmail.sinamail.sina.com.cn> References: <20170304081242.16C9610200E3@webmail.sinamail.sina.com.cn> Message-ID: <1edf47702a52c8afe938e5e6a632dbdc@none.at> Hi Liu Peng. Am 04-03-2017 09:12, schrieb tjlp at sina.com: > > Hi, Alexks, > > I don't want to hide the header. > My problem is that Nginx change the "Connection: close" header in the > reponse from upstream server to "Connction: keep-alive" and send to > client. I want to keep the original "Connection: close" header. Ah that's a clear question. It took us only 3 rounds to get to this clear question ;-) So now the standard Questions from me: What's the output of nginx -V ? What's your config? 
Maybe you have set 'keepalive' in the upstream config http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive or 'proxy_http_version 1.1;' http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version as a last resort you can just pass the header with 'proxy_pass_header Connection;'. http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass_header Choose the solution which fit's to your demand. I can only guess due to the fact that we don't know your config. May I ask you to take a look into this document, which exists in several languages, thank you very much. http://www.catb.org/~esr/faqs/smart-questions.html Best regards Aleks > Thanks > Liu Peng > > ----- ???? ----- > ????Aleksandar Lazic > ????tjlp at sina.com > ????nginx > ???Re:_???Re:_Issue_about_nginx_removing_the_header_"Connection"_in_HTTP_response? > ???2017?03?03? 16?19? > Hi. > > then one directive upward. > > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header > > Cheers > > aleks > > Am 03-03-2017 06:00, schrieb tjlp at sina.com: > >> Hi, >> >> What I mention is the header in response from backend server. Your >> answer about proxy_set_header is the "Connection" header in request. >> >> Thanks >> Liu Peng >> >> ----- ???? ----- >> ????Aleksandar Lazic >> ????nginx at nginx.org >> ????tjlp at sina.com >> ???Re: Issue about nginx removing the header "Connection" in HTTP >> response? >> ???2017?03?03? 06?25? >> >> Hi. >> Am 01-03-2017 08:29, schrieb tjlp at sina.com: >>> Hi, nginx guy, >>> >>> In our system, for some special requests, the upstream server will >>> return a response which the header includes "Connection: Close". >>> According to HTTP protocol, "Connection" is one-hop header. >>> So, nginx will remove this header and the client can't do the >>> business >>> logic correctly. >>> >>> How to handle this scenario? >> you mean something like this? 
>> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_set_header >> If the value of a header field is an empty string then this field will >> not be passed to a proxied server: >> proxy_set_header Connection ""; >>> Thanks >>> Liu Peng >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Sat Mar 4 12:03:09 2017 From: francis at daoine.org (Francis Daly) Date: Sat, 4 Mar 2017 12:03:09 +0000 Subject: How to cache image urls with query strings? In-Reply-To: <547156602b6ad415d4e04574b73d59f4.NginxMailingListEnglish@forum.nginx.org> References: <547156602b6ad415d4e04574b73d59f4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170304120309.GD15209@daoine.org> On Fri, Feb 24, 2017 at 07:33:19AM -0500, 0liver wrote: Hi there, > Can anybody point to me to the necessary pieces of information to get > caching for resources with query strings to work? There should be nothing special needed on the nginx side for this to work -- it caches what it is told to cache by configuration and response from upstream. Since the only change you made to the requests is the query string, the most likely thing is that the upstream server sends different response headers for requests with and without query-strings. Possibly the response from upstream now has a "do not cache this" indication. Can you compare the http responses from the upstream server for the old and new cases? If it does say "do not cache", then you can either configure it not to say that; or configure nginx not to agree to that part of the response, as was shown in an earlier reply. If it does not, then more investigation is needed. 
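For the second option, a minimal sketch looks like the following (the location, the upstream name "backend", and the cache zone "my_cache" are invented for the example; it assumes a matching proxy_cache_path is defined in http{}):

```nginx
location /images/ {
    proxy_pass http://backend;
    proxy_cache my_cache;

    # Override the upstream's "do not cache this" signals -- use with
    # care, since the upstream may have good reasons for sending them.
    proxy_ignore_headers Cache-Control Expires Set-Cookie;

    # With the upstream's headers ignored, cache validity is set here.
    proxy_cache_valid 200 10m;
}
```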
Good luck with it, f -- Francis Daly francis at daoine.org From tjlp at sina.com Sat Mar 4 15:57:51 2017 From: tjlp at sina.com (tjlp at sina.com) Date: Sat, 04 Mar 2017 23:57:51 +0800 Subject: =?UTF-8?B?5Zue5aSN77yaUmU6X+WbnuWkje+8mlJlOl/lm57lpI3vvJpSZTpfSXNzdWVfYWJv?= =?UTF-8?B?dXRfbmdpbnhfcmVtb3ZpbmdfdGhlX2hlYWRlcl8iQ29ubmVjdGlvbiJfaW5f?= =?UTF-8?B?SFRUUF9yZXNwb25zZT8=?= Message-ID: <20170304155751.5A43B102056C@webmail.sinamail.sina.com.cn> Hi, Aleks, Actually I read what you mention. The document about "proxy_pass_header" just pass the headers listed in "proxy_hide_header" which do not include "Connection", so I think it might doesn't work. I will try this. BTW, this module ngx_http_upstream_module should be built by default right, because the directive proxy_pass is supported by this module. My output of "nginx -V" doesn't not include this module. Thanks Liu Peng ----- ???? ----- ????Aleksandar Lazic ????tjlp at sina.com ????nginx ???Re:_???Re:_???Re:_Issue_about_nginx_removing_the_header_"Connection"_in_HTTP_response? ???2017?03?04? 17?22? Hi Liu Peng. Am 04-03-2017 09:12, schrieb tjlp at sina.com: > > Hi, Alexks, > > I don't want to hide the header. > My problem is that Nginx change the "Connection: close" header in the > reponse from upstream server to "Connction: keep-alive" and send to > client. I want to keep the original "Connection: close" header. Ah that's a clear question. It took us only 3 rounds to get to this clear question ;-) So now the standard Questions from me: What's the output of nginx -V ? What's your config? Maybe you have set 'keepalive' in the upstream config http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive or 'proxy_http_version 1.1;' http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version as a last resort you can just pass the header with 'proxy_pass_header Connection;'. http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass_header Choose the solution which fit's to your demand. 
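For reference, the first two of those directives combine roughly like this (a sketch only -- the upstream name and port are invented; this is the setup that enables keep-alive towards the upstream, so it may be exactly what you want to remove):

```nginx
upstream backend {
    server 127.0.0.1:8080;
    # keep up to 16 idle connections to the upstream open for reuse
    keepalive 16;
}

server {
    location / {
        proxy_pass http://backend;
        # upstream keep-alive needs HTTP/1.1 and a cleared
        # Connection request header
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
```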
I can only guess due to the fact that we don't know your config. May I ask you to take a look into this document, which exists in several languages, thank you very much. http://www.catb.org/~esr/faqs/smart-questions.html Best regards Aleks > Thanks > Liu Peng > > ----- ???? ----- > ????Aleksandar Lazic > ????tjlp at sina.com > ????nginx > ???Re:_???Re:_Issue_about_nginx_removing_the_header_"Connection"_in_HTTP_response? > ???2017?03?03? 16?19? > Hi. > > then one directive upward. > > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header > > Cheers > > aleks > > Am 03-03-2017 06:00, schrieb tjlp at sina.com: > >> Hi, >> >> What I mention is the header in response from backend server. Your >> answer about proxy_set_header is the "Connection" header in request. >> >> Thanks >> Liu Peng >> >> ----- ???? ----- >> ????Aleksandar Lazic >> ????nginx at nginx.org >> ????tjlp at sina.com >> ???Re: Issue about nginx removing the header "Connection" in HTTP >> response? >> ???2017?03?03? 06?25? >> >> Hi. >> Am 01-03-2017 08:29, schrieb tjlp at sina.com: >>> Hi, nginx guy, >>> >>> In our system, for some special requests, the upstream server will >>> return a response which the header includes "Connection: Close". >>> According to HTTP protocol, "Connection" is one-hop header. >>> So, nginx will remove this header and the client can't do the >>> business >>> logic correctly. >>> >>> How to handle this scenario? >> you mean something like this? >> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_set_header >> If the value of a header field is an empty string then this field will >> not be passed to a proxied server: >> proxy_set_header Connection ""; >>> Thanks >>> Liu Peng >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Sat Mar 4 21:52:36 2017 From: nginx-forum at forum.nginx.org (173279834462) Date: Sat, 04 Mar 2017 16:52:36 -0500 Subject: conditional expression Message-ID: <79e009fd1882239dd89951e73209d5a4.NginxMailingListEnglish@forum.nginx.org> Hello, Our local policy demands the rejection of any query; we do this as follows: if ($is_args) { return 301 /; } The introduction of Thunderbird autoconfiguration demands an exception to the above policy, because of "GET /.well-known/autoconfig/mail/config-v1.1.xml?emailaddre=uname%40example.com". The resulting rule would be if (($is_args) && ($args !~ emailaddress=.+%40[a-zA-Z0-9\.\-]+)) { return 301 /; } The rule does not work, because nginx does not parse the AND condition. Of course, you cannot just remove $is_args, because $args is usually empty. The alternative would be if ($args ~ emailaddress=.+%40[a-zA-Z0-9\.\-]+)) { allow all; } else { return 301 /; }, but nginx does not parse if-then-else statements. Are we stuck in the cage? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272758,272758#msg-272758 From nginx-forum at forum.nginx.org Sat Mar 4 22:35:21 2017 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Sat, 04 Mar 2017 17:35:21 -0500 Subject: Nginx Map how to check value if empty In-Reply-To: <20170304081433.GA15209@daoine.org> References: <20170304081433.GA15209@daoine.org> Message-ID: Thank's Francis much appreciated it seems to be working good :) Francis Daly Wrote: ------------------------------------------------------- > On Fri, Mar 03, 2017 at 10:47:26AM -0500, c0nw0nk wrote: > > Hi there, > > > map $http_cf_connecting_ip $client_ip_from_cf { > > default $http_cf_connecting_ip; > > } > > > > How can I make it so if the client did not send that $http_ header > it makes > > $client_ip_from_cf variable value = $binary_remote_addr > > > > Not sure how to check in a map if that http header is present. > > If the http header is absent, the matching variable is empty. 
So it > will > have the value "". > > Use that as the first half of your "map" pair. > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272744,272759#msg-272759 From kevin at nginx.com Sun Mar 5 03:06:03 2017 From: kevin at nginx.com (Kevin Jones) Date: Sat, 4 Mar 2017 19:06:03 -0800 Subject: conditional expression In-Reply-To: <79e009fd1882239dd89951e73209d5a4.NginxMailingListEnglish@forum.nginx.org> References: <79e009fd1882239dd89951e73209d5a4.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, You would be better to accomplish this with map. Would this work? map $args $redirect { ~emailaddress=.+%40[a-zA-Z0-9\.\-] ''; default "1"; } server { ... location / { ... } location /foo { if ($redirect) { return 301 /; } ... } } On Sat, Mar 4, 2017 at 1:52 PM, 173279834462 wrote: > Hello, > > Our local policy demands the rejection of any query; we do this as follows: > if ($is_args) { return 301 /; } > > The introduction of Thunderbird autoconfiguration demands an exception to > the above policy, because of > "GET > /.well-known/autoconfig/mail/config-v1.1.xml?emailaddre=uname% > 40example.com". > > The resulting rule would be > > if (($is_args) && ($args !~ emailaddress=.+%40[a-zA-Z0-9\.\-]+)) { return > 301 /; } > > The rule does not work, because nginx does not parse the AND condition. > > Of course, you cannot just remove $is_args, because $args is usually empty. > > > The alternative would be if ($args ~ emailaddress=.+%40[a-zA-Z0-9\.\-]+)) > { > allow all; } else { return 301 /; }, > but nginx does not parse if-then-else statements. > > Are we stuck in the cage? 
> > Posted at Nginx Forum: https://forum.nginx.org/read.p > hp?2,272758,272758#msg-272758 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin at nginx.com Sun Mar 5 03:20:11 2017 From: kevin at nginx.com (Kevin Jones) Date: Sat, 4 Mar 2017 19:20:11 -0800 Subject: conditional expression In-Reply-To: References: <79e009fd1882239dd89951e73209d5a4.NginxMailingListEnglish@forum.nginx.org> Message-ID: Also, It might be better to check the $arg_*argument *variable instead and also set a check for $is_args. NGINX will process them in order within the configuration. map $is_args $redirect { "?" "1"; default ""; } map $arg_emailaddress $redirect { ~.+%40[a-zA-Z0-9\.\-] ''; default "1"; } On Sat, Mar 4, 2017 at 7:06 PM, Kevin Jones wrote: > Hi, > > You would be better to accomplish this with map. Would this work? > > map $args $redirect { > ~emailaddress=.+%40[a-zA-Z0-9\.\-] ''; > default "1"; > } > > server { > ... > location / { > ... > } > > location /foo { > if ($redirect) { return 301 /; } > ... > } > > } > > On Sat, Mar 4, 2017 at 1:52 PM, 173279834462 > wrote: > >> Hello, >> >> Our local policy demands the rejection of any query; we do this as >> follows: >> if ($is_args) { return 301 /; } >> >> The introduction of Thunderbird autoconfiguration demands an exception to >> the above policy, because of >> "GET >> /.well-known/autoconfig/mail/config-v1.1.xml?emailaddre=uname% >> 40example.com". >> >> The resulting rule would be >> >> if (($is_args) && ($args !~ emailaddress=.+%40[a-zA-Z0-9\.\-]+)) { return >> 301 /; } >> >> The rule does not work, because nginx does not parse the AND condition. >> >> Of course, you cannot just remove $is_args, because $args is usually >> empty. 
>> >> >> The alternative would be if ($args ~ emailaddress=.+%40[a-zA-Z0-9\.\-]+)) >> { >> allow all; } else { return 301 /; }, >> but nginx does not parse if-then-else statements. >> >> Are we stuck in the cage? >> >> Posted at Nginx Forum: https://forum.nginx.org/read.p >> hp?2,272758,272758#msg-272758 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -- Kevin Jones Technical Solutions Architect | NGINX, Inc. kevin at nginx.com Skype/Twitter @webopsx -------------- next part -------------- An HTML attachment was scrubbed... URL: From aldernetwork at gmail.com Sun Mar 5 05:58:57 2017 From: aldernetwork at gmail.com (Alder Netw) Date: Sat, 4 Mar 2017 21:58:57 -0800 Subject: questions on module developer Message-ID: Anybody here can tell me when these twe function init_process(), and exit_process() in a module (ngx_module_t) be called? init_process being called once when nginx starts or when a query matched with a location defined in nginx.conf is received? When would the exit_process() gets called? At process exits or never gets called as long as nginx is alive? Thanks, -------------- next part -------------- An HTML attachment was scrubbed... URL: From dewanggaba at xtremenitro.org Sun Mar 5 11:29:57 2017 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Sun, 5 Mar 2017 18:29:57 +0700 Subject: stale-while-revalidate and stale-if-error implementation Message-ID: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Hello! I tried to use "stale-if-error=864000" and "stale-while-revalidate=864000" co-exist with "expires max;" directive. Is it possible? My configurations looks like : ... snip ... expires max; add_header Cache-Control "stale-while-revalidate=864000, stale-if-error=864000"; ... snip ... 
And header response return : cache-control:max-age=315360000 cache-control:stale-while-revalidate=864000, stale-if-error=864000 Is it ok and acceptable in major modern browser? Or, should I changes the header to : add_header Cache-Control "max-age=315360000, stale-while-revalidate=864000, stale-if-error=864000"; Any feedbacks and helps would be appreciated. Thanks in advance. -----BEGIN PGP SIGNATURE----- Version: GnuPG v2 iQI4BAEBCAAiBQJYu/avGxxkZXdhbmdnYWJhQHh0cmVtZW5pdHJvLm9yZwAKCRDl f9IgoCjNcDy+D/0Vt2QvGF9RES6ZbWsyMdKiaQOWo11uU73ZDjqJNCTFbrlA6DgF XYBdqugrcrLn6zlMP95KRmRgyNHHDsrbnVya5Sj0PhNno61k7i56nZZpt2AQ7fii HaZ12PFIWB7RCa+Wbx4efj8PN5orhXVhg6R/bnOu5aN1ht0MLlcO0Y2mywQnSTZ1 kUsRWQZQRvH5gel/DewVYitvZ+a+FePLN2O/BgO6FoW0HNRmtySfTgKtHGARn+Zn itPXSsM6gNYteNWFeWWC+2NrCTMBcPtlP9IzSS43Htmnmlkih7nEpSwSPNGAiTsf 3d3h6yl27URmbP1P33P7ef8ohKdoa3CXBlvETLCMEct+LApuAlY8bqhpGoeI4tHX HfPo5g+cdI7l60bUQeQNQKTCe4NnfVXfKzp29Lj2WsHYlLfaaG46mbccUma4g665 dNk4ETWnk9ea3XPuFEx6x66j8JNW/+PBOJPvONZoiIWGp0fNmOiBhjMqIEEYGkI+ mFmqTQwmm2U7iRCM0umWvhnnqaSidjCviRYq/oeEH4sVULinM3SbUL5JSosa9+T2 xaAtqibqzCI9wnHQM7QFRdxY+E+h8EI3d8NFZrrfB9c0Y5CQMLX4GCCLQe6fHYCn AAqWshuqWo1FckQO83cbQwRDOolPF8TOi6WYzrNzJjKIKGz5mbbhivTnkw== =9/H5 -----END PGP SIGNATURE----- From nginx-forum at forum.nginx.org Sun Mar 5 16:31:32 2017 From: nginx-forum at forum.nginx.org (173279834462) Date: Sun, 05 Mar 2017 11:31:32 -0500 Subject: conditional expression In-Reply-To: References: Message-ID: <32bef57dcdf9f142ee4d84dbd03494ee.NginxMailingListEnglish@forum.nginx.org> Works for me (so far): map $query_string $bad_query { "~[^&;]+([&;][^&;]*){1,}" 1; # deny two or more parameters "~emailaddress=[^@]+%40[^@]+" 0; # allow Thunderbird autoconf "~.+=.+" 1; # deny any other query default 0; # allow (no parameters) } if ($bad_query = 1) { return 444; break; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272758,272764#msg-272764 From nginx-forum at forum.nginx.org Sun Mar 5 21:00:35 2017 From: nginx-forum at 
forum.nginx.org (c0nw0nk) Date: Sun, 05 Mar 2017 16:00:35 -0500 Subject: Nginx Map how to check value if empty In-Reply-To: <20170304081433.GA15209@daoine.org> References: <20170304081433.GA15209@daoine.org> Message-ID: <26f36df2afe4d8d0ca2bc04b5c582d51.NginxMailingListEnglish@forum.nginx.org> Francis Daly Wrote: ------------------------------------------------------- > On Fri, Mar 03, 2017 at 10:47:26AM -0500, c0nw0nk wrote: > > Hi there, > > > map $http_cf_connecting_ip $client_ip_from_cf { > > default $http_cf_connecting_ip; > > } > > > > How can I make it so if the client did not send that $http_ header > it makes > > $client_ip_from_cf variable value = $binary_remote_addr > > > > Not sure how to check in a map if that http header is present. > > If the http header is absent, the matching variable is empty. So it > will > have the value "". > > Use that as the first half of your "map" pair. > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Hey Francis, HTTP BLOCK log_format cf_custom 'CFIP:$http_cf_connecting_ip - $remote_addr - $remote_user [$time_local] ' '"$request" Status:$status $body_bytes_sent ' '"$http_referer" "$http_user_agent"' '$http_cf_ray'; map $status $loggable { ~^[23] 0; default 1; } access_log logs/access.log cf_custom if=$loggable; map $remote_addr $client_ip_from_cf { default $remote_addr; } Access.log output CFIP:- - 10.108.22.40 - - [05/Mar/2017:21:27:50 +0100] "GET Any idea why the remote_addr is empty surely it should display to me the clients IP address i am not proxying traffic or anything like that. I expected to see an IP there instead it seems the value is empty. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272744,272766#msg-272766 From reallfqq-nginx at yahoo.fr Sun Mar 5 21:37:50 2017 From: reallfqq-nginx at yahoo.fr (B.R.) 
Date: Sun, 5 Mar 2017 22:37:50 +0100 Subject: Nginx Map how to check value if empty In-Reply-To: <26f36df2afe4d8d0ca2bc04b5c582d51.NginxMailingListEnglish@forum.nginx.org> References: <20170304081433.GA15209@daoine.org> <26f36df2afe4d8d0ca2bc04b5c582d51.NginxMailingListEnglish@forum.nginx.org> Message-ID: That is because it is not: your eyes deceived you having a too quick look at the log line. Your 'empty' variables are actually showing the value '-' in this log line. It probably does not help debugging to have static '-' mixed in the format of your log lines where you put them. --- *B. R.* On Sun, Mar 5, 2017 at 10:00 PM, c0nw0nk wrote: > Francis Daly Wrote: > ------------------------------------------------------- > > On Fri, Mar 03, 2017 at 10:47:26AM -0500, c0nw0nk wrote: > > > > Hi there, > > > > > map $http_cf_connecting_ip $client_ip_from_cf { > > > default $http_cf_connecting_ip; > > > } > > > > > > How can I make it so if the client did not send that $http_ header > > it makes > > > $client_ip_from_cf variable value = $binary_remote_addr > > > > > > Not sure how to check in a map if that http header is present. > > > > If the http header is absent, the matching variable is empty. So it > > will > > have the value "". > > > > Use that as the first half of your "map" pair. 
> > > > f > > -- > > Francis Daly francis at daoine.org > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > Hey Francis, > > > HTTP BLOCK > log_format cf_custom 'CFIP:$http_cf_connecting_ip - $remote_addr - > $remote_user [$time_local] ' > '"$request" Status:$status $body_bytes_sent ' > '"$http_referer" "$http_user_agent"' > '$http_cf_ray'; > map $status $loggable { > ~^[23] 0; > default 1; > } > access_log logs/access.log cf_custom if=$loggable; > > map $remote_addr $client_ip_from_cf { > default $remote_addr; > } > > > Access.log output > CFIP:- - 10.108.22.40 - - [05/Mar/2017:21:27:50 +0100] "GET > > > Any idea why the remote_addr is empty surely it should display to me the > clients IP address i am not proxying traffic or anything like that. I > expected to see an IP there instead it seems the value is empty. > > Posted at Nginx Forum: https://forum.nginx.org/read.p > hp?2,272744,272766#msg-272766 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sun Mar 5 21:50:53 2017 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Sun, 05 Mar 2017 16:50:53 -0500 Subject: Nginx Map how to check value if empty In-Reply-To: References: Message-ID: <377e02fab51fe3a329b75a8079f30323.NginxMailingListEnglish@forum.nginx.org> Thank's for the info :) But why is $remote_addr outputting a hyphen instead of the users IP... I still expect to see the client's IP address. B.R. via nginx Wrote: ------------------------------------------------------- > That is because it is not: your eyes deceived you having a too quick > look > at the log line. > > Your 'empty' variables are actually showing the value '-' in this log > line. 
> It probably does not help debugging to have static '-' mixed in the > format > of your log lines where you put them. > --- > *B. R.* > > On Sun, Mar 5, 2017 at 10:00 PM, c0nw0nk > wrote: > > > Francis Daly Wrote: > > ------------------------------------------------------- > > > On Fri, Mar 03, 2017 at 10:47:26AM -0500, c0nw0nk wrote: > > > > > > Hi there, > > > > > > > map $http_cf_connecting_ip $client_ip_from_cf { > > > > default $http_cf_connecting_ip; > > > > } > > > > > > > > How can I make it so if the client did not send that $http_ > header > > > it makes > > > > $client_ip_from_cf variable value = $binary_remote_addr > > > > > > > > Not sure how to check in a map if that http header is present. > > > > > > If the http header is absent, the matching variable is empty. So > it > > > will > > > have the value "". > > > > > > Use that as the first half of your "map" pair. > > > > > > f > > > -- > > > Francis Daly francis at daoine.org > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > Hey Francis, > > > > > > HTTP BLOCK > > log_format cf_custom 'CFIP:$http_cf_connecting_ip - $remote_addr - > > $remote_user [$time_local] ' > > '"$request" Status:$status $body_bytes_sent ' > > '"$http_referer" "$http_user_agent"' > > '$http_cf_ray'; > > map $status $loggable { > > ~^[23] 0; > > default 1; > > } > > access_log logs/access.log cf_custom if=$loggable; > > > > map $remote_addr $client_ip_from_cf { > > default $remote_addr; > > } > > > > > > Access.log output > > CFIP:- - 10.108.22.40 - - [05/Mar/2017:21:27:50 +0100] "GET > > > > > > Any idea why the remote_addr is empty surely it should display to me > the > > clients IP address i am not proxying traffic or anything like that. > I > > expected to see an IP there instead it seems the value is empty. 
> > > > Posted at Nginx Forum: https://forum.nginx.org/read.p > > hp?2,272744,272766#msg-272766 > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272744,272768#msg-272768 From alex at samad.com.au Sun Mar 5 23:00:17 2017 From: alex at samad.com.au (Alex Samad) Date: Mon, 6 Mar 2017 10:00:17 +1100 Subject: Balancing NGINX reverse proxy In-Reply-To: <25490575d14ffef383dbad6adc909775.NginxMailingListEnglish@forum.nginx.org> References: <25490575d14ffef383dbad6adc909775.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi Firstly, I am fairly new to nginx. >From what I understand you have a standard sort of setup. 2 nodes (vm's) with haproxy, allowing nginx to be active / passive. You have SSL requests which once nginx terminates the SSL, it injects a security header / token and then I presume it passes this on to a back end, i presume that the nginx to application server is non SSL. You are having performance issue with the SSL + header inject part, which seems to be limiting you to approx 60req per sec before you hit 100% cpu.. This seems very very low to me looking at my prod setup - similar to yours I am seeing 600 connections and req/s ranging from 8-400 / sec. all whilst the cpu stay very very low. We try and use long lived TCP / SSL sessions, but we also use a thick client as well so have more control. Not sure about KEMP loadmaster. What I describe to you was our potential plans for when the load gets too much on the active/passive setup. It would allow you to take your 60 session ? and distributed it between 2 or upto 16 (I believe this is the max for pacemaker). 
an active / active setup The 2 node setup would be the same as yours router -> vlan with the 2 nodes > Node A would only process node a data and node B would only process node b data. This in theory would have the potential to double your req / sec. Alex On 3 March 2017 at 19:33, polder_trash wrote: > Alexsamad, > I might not have been clear, allow me to try again: > > * currently 2 NGINX revproxy nodes, 1 active the other on standby in case > node 1 fails. > * Since I am injecting an authentication header into the request, the HTTPS > request has to be offloaded at the node and introduces additional load > compared to injecting into non-encrypted requests. > * Current peak load ~60 concurrent requests, ~100% load on CPU. Concurrent > requests expected to more than double, so revproxy will be bottleneck. > > The NGINX revproxies run as a VM and I can ramp up the machine specs a > little bit, but I do not expect this to completely solve the issue here. > Therefore I am looking for some method of spreading the requests over > multiple backend revproxies, without the load balancer frontend having to > deal with SSL offloading. > > From the KEMP LoadMaster documentation I found that this technique is > called > SSL Passthrough. I am currently looking if that is also supported by NGINX. > > What do you think? Will this solve my issue? Am I on the wrong track? > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,272713,272729#msg-272729 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From peter_booth at me.com Mon Mar 6 01:13:56 2017 From: peter_booth at me.com (Peter Booth) Date: Sun, 05 Mar 2017 20:13:56 -0500 Subject: Balancing NGINX reverse proxy In-Reply-To: References: <25490575d14ffef383dbad6adc909775.NginxMailingListEnglish@forum.nginx.org> Message-ID: <54C325EE-5FBF-41AE-BA9A-D0D01AD03281@me.com> So I have a few different thoughts: 1. Yes nginx does support SSL pass through . You can configure nginx to stream your request to your SSL backend. I do this when I don't have control of the backend and it has to be SSL. I don't think that's your situation. 2. I suspect that there's something wrong with your SSL configuration and/or your nginx VMs are underpowered. Can you test the throughput that you are requesting http static resources ? Check with webpagetest.org that the expense is only being paid on the first request. 3. It's generally better to terminate SSL as early as possible and have the bulk of your communication be unencrypted. What spec are your VMs? Sent from my iPhone > On Mar 5, 2017, at 6:00 PM, Alex Samad wrote: > > Hi > > Firstly, I am fairly new to nginx. > > > From what I understand you have a standard sort of setup. > > > 2 nodes (vm's) with haproxy, allowing nginx to be active / passive. > > You have SSL requests which once nginx terminates the SSL, it injects a security header / token and then I presume it passes this on to a back end, i presume that the nginx to application server is non SSL. > > You are having performance issue with the SSL + header inject part, which seems to be limiting you to approx 60req per sec before you hit 100% cpu.. This seems very very low to me looking at my prod setup - similar to yours I am seeing 600 connections and req/s ranging from 8-400 / sec. all whilst the cpu stay very very low. > > We try and use long lived TCP / SSL sessions, but we also use a thick client as well so have more control. > > Not sure about KEMP loadmaster. 
> > What I describe to you was our potential plans for when the load gets too much on the active/passive setup. > > It would allow you to take your 60 session ? and distributed it between 2 or upto 16 (I believe this is the max for pacemaker). an active / active setup > > The 2 node setup would be the same as yours > > > router -> vlan with the 2 nodes > Node A would only process node a data and node B would only process node b data. This in theory would have the potential to double your req / sec. > > > > Alex > > >> On 3 March 2017 at 19:33, polder_trash wrote: >> Alexsamad, >> I might not have been clear, allow me to try again: >> >> * currently 2 NGINX revproxy nodes, 1 active the other on standby in case >> node 1 fails. >> * Since I am injecting an authentication header into the request, the HTTPS >> request has to be offloaded at the node and introduces additional load >> compared to injecting into non-encrypted requests. >> * Current peak load ~60 concurrent requests, ~100% load on CPU. Concurrent >> requests expected to more than double, so revproxy will be bottleneck. >> >> The NGINX revproxies run as a VM and I can ramp up the machine specs a >> little bit, but I do not expect this to completely solve the issue here. >> Therefore I am looking for some method of spreading the requests over >> multiple backend revproxies, without the load balancer frontend having to >> deal with SSL offloading. >> >> From the KEMP LoadMaster documentation I found that this technique is called >> SSL Passthrough. I am currently looking if that is also supported by NGINX. >> >> What do you think? Will this solve my issue? Am I on the wrong track? 
>> >> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272713,272729#msg-272729 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Mar 6 02:58:47 2017 From: nginx-forum at forum.nginx.org (Nomad Worker) Date: Sun, 05 Mar 2017 21:58:47 -0500 Subject: ssl_session_timeout issues Message-ID: <97b02fc5f51caf3b9a323ee096f7b1c6.NginxMailingListEnglish@forum.nginx.org> I read the code of ssl module, the directive ssl_session_timeout seems only used for ssl session cache, not for ssl session ticket. the document describes the directive as 'Specifies a time during which a client may reuse the session parameters.' Is it not exactly? Is there any timeout for ssl session ticket ? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272772,272772#msg-272772 From steve at greengecko.co.nz Mon Mar 6 03:11:01 2017 From: steve at greengecko.co.nz (steve) Date: Mon, 6 Mar 2017 16:11:01 +1300 Subject: Balancing NGINX reverse proxy In-Reply-To: <3a79824d76eaffe5907a5d8d7a0d450c.NginxMailingListEnglish@forum.nginx.org> References: <3a79824d76eaffe5907a5d8d7a0d450c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1cc3f5a0-a427-6649-6240-35dfe9312801@greengecko.co.nz> Hi, On 03/03/17 03:40, polder_trash wrote: > Hi, > > > I already tried adding both IP addresses to the DNS. But this, rather > predictably, only sent a handful of users to the secondary node. > This should not be the case ( well, for bind anyway ), as it should be delivering them in a round robin fashion. Alternatively ( bind again ) you can set up a split horizon DNS to deliver as a function of source ip address. 
Maybe the TTL hadn't expired on existing lookups, or the client is doing something strange? Steve -- Steve Holdoway BSc(Hons) MIITP https://www.greengecko.co.nz/ Linkedin: https://www.linkedin.com/in/steveholdoway Skype: sholdowa From sca at andreasschulze.de Mon Mar 6 11:39:56 2017 From: sca at andreasschulze.de (A. Schulze) Date: Mon, 06 Mar 2017 12:39:56 +0100 Subject: ssl_session_timeout issues In-Reply-To: <97b02fc5f51caf3b9a323ee096f7b1c6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170306123956.Horde.0PKdRhqnlp52QtyfWtYV0K_@andreasschulze.de> Nomad Worker: > I read the code of ssl module, the directive ssl_session_timeout seems only > used for ssl session cache, not for ssl session ticket. > the document describes the directive as 'Specifies a time during which a > client may reuse the session parameters.' Is it not exactly? > Is there any timeout for ssl session ticket ? or more general: is the usage of ssl session tickets suggested at all? these two links motivated me to set "ssl_session_tickets off" - https://www.farsightsecurity.com/Blog/20151202-thall-hardening-dh-and-ecc/ - https://timtaubert.de/blog/2014/11/the-sad-state-of-server-side-tls-session-resumption-implementations/ What are others opinions? Andreas From reallfqq-nginx at yahoo.fr Mon Mar 6 12:02:47 2017 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 6 Mar 2017 13:02:47 +0100 Subject: Nginx Map how to check value if empty In-Reply-To: <377e02fab51fe3a329b75a8079f30323.NginxMailingListEnglish@forum.nginx.org> References: <377e02fab51fe3a329b75a8079f30323.NginxMailingListEnglish@forum.nginx.org> Message-ID: Again, it is not empty, nor containing an hyphen... Look slowly to the log line and compare it to the log format. You use hyphens as separators, which, again, might not be a good idea at this precise location. ?The IP address you get is a private one though, so not 'client' but rather 'downstream'.? 
?It seems you are hiding behind CloudFlare, you should read their doc to see how to get the real client IP address. The HTTP header you tried to use seems to be empty (ie 'dash'. As for using hyphens rather than real emptiness, I guess that helps validating there is no real value, ?differentating this case from a bogus 'empty' which would be a sign of a bug. --- *B. R.* On Sun, Mar 5, 2017 at 10:50 PM, c0nw0nk wrote: > Thank's for the info :) > > But why is $remote_addr outputting a hyphen instead of the users IP... > > I still expect to see the client's IP address. > > B.R. via nginx Wrote: > ------------------------------------------------------- > > That is because it is not: your eyes deceived you having a too quick > > look > > at the log line. > > > > Your 'empty' variables are actually showing the value '-' in this log > > line. > > It probably does not help debugging to have static '-' mixed in the > > format > > of your log lines where you put them. > > --- > > *B. R.* > > > > On Sun, Mar 5, 2017 at 10:00 PM, c0nw0nk > > wrote: > > > > > Francis Daly Wrote: > > > ------------------------------------------------------- > > > > On Fri, Mar 03, 2017 at 10:47:26AM -0500, c0nw0nk wrote: > > > > > > > > Hi there, > > > > > > > > > map $http_cf_connecting_ip $client_ip_from_cf { > > > > > default $http_cf_connecting_ip; > > > > > } > > > > > > > > > > How can I make it so if the client did not send that $http_ > > header > > > > it makes > > > > > $client_ip_from_cf variable value = $binary_remote_addr > > > > > > > > > > Not sure how to check in a map if that http header is present. > > > > > > > > If the http header is absent, the matching variable is empty. So > > it > > > > will > > > > have the value "". > > > > > > > > Use that as the first half of your "map" pair. 
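[Editor's note: Francis's suggestion, written out as a config fragment using the variable names from the thread; map blocks belong in the http context:]

```nginx
# If the CF-Connecting-IP request header is absent, $http_cf_connecting_ip
# is the empty string, so the "" key selects the fallback value.
map $http_cf_connecting_ip $client_ip_from_cf {
    ""      $binary_remote_addr;     # header missing: fall back to the connection address
    default $http_cf_connecting_ip;  # header present: use it
}
```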
> > > > > > > > f > > > > -- > > > > Francis Daly francis at daoine.org > > > > _______________________________________________ > > > > nginx mailing list > > > > nginx at nginx.org > > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > > Hey Francis, > > > > > > > > > HTTP BLOCK > > > log_format cf_custom 'CFIP:$http_cf_connecting_ip - $remote_addr - > > > $remote_user [$time_local] ' > > > '"$request" Status:$status $body_bytes_sent ' > > > '"$http_referer" "$http_user_agent"' > > > '$http_cf_ray'; > > > map $status $loggable { > > > ~^[23] 0; > > > default 1; > > > } > > > access_log logs/access.log cf_custom if=$loggable; > > > > > > map $remote_addr $client_ip_from_cf { > > > default $remote_addr; > > > } > > > > > > > > > Access.log output > > > CFIP:- - 10.108.22.40 - - [05/Mar/2017:21:27:50 +0100] "GET > > > > > > > > > Any idea why the remote_addr is empty surely it should display to me > > the > > > clients IP address i am not proxying traffic or anything like that. > > I > > > expected to see an IP there instead it seems the value is empty. > > > > > > Posted at Nginx Forum: https://forum.nginx.org/read.p > > > hp?2,272744,272766#msg-272766 > > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,272744,272768#msg-272768 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Mon Mar 6 14:02:33 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 6 Mar 2017 17:02:33 +0300 Subject: ssl_session_timeout issues In-Reply-To: <97b02fc5f51caf3b9a323ee096f7b1c6.NginxMailingListEnglish@forum.nginx.org> References: <97b02fc5f51caf3b9a323ee096f7b1c6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170306140232.GC23126@mdounin.ru> Hello! On Sun, Mar 05, 2017 at 09:58:47PM -0500, Nomad Worker wrote: > I read the code of ssl module, the directive ssl_session_timeout seems only > used for ssl session cache, not for ssl session ticket. > the document describes the directive as 'Specifies a time during which a > client may reuse the session parameters.' Is it not exactly? > Is there any timeout for ssl session ticket ? The documentation is correct here, and your reading of the code is wrong or you are reading a very outdated version of the code. SSL_CTX_set_timeout() is always called, so ssl_session_timeout applies to all forms of session resumption, that is, both session cache and session tickets. See this commit for details: http://hg.nginx.org/nginx/rev/767aa37f12de -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Mon Mar 6 19:12:40 2017 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Mon, 06 Mar 2017 14:12:40 -0500 Subject: Nginx Map how to check value if empty In-Reply-To: References: Message-ID: So I figured out the problem is a bit of a dynamic one. My Nginx accepts some connections via cloudflare's proxy and other's via their DNS only and other connections go through a load balancing ip that sets a x-forwarded-for header containing the real IP, While others can avoid all of that and connect to a specific origin servers IP (remote_addr is the real IP for these connections). 
So to explain how to get the origin IP for each method someone could be using, here is the list:

Cloudflare's proxied traffic: sets the $http_cf_connecting_ip header, so use that header to get the client's real IP.

Traffic from Cloudflare via DNS-only connections: these do not have the $http_cf_connecting_ip header present, but those connections hit a load balancing IP that sets the $http_x_forwarded_for header, so that is the way to get the client's real IP for those connections.

And then some connections don't hit my load balancing IP at all and go directly to a specific origin server; those connections can use $remote_addr.

My solution / conclusion: how to come up with a fix that allows me to obtain the real IP in a dynamic situation like this? I have solved my issue with the following:

map $http_x_forwarded_for $client_ip_x_forwarded_for {
    ""      $remote_addr;               # header missing: use remote_addr as the real IP
    default $http_x_forwarded_for;
}
map $http_cf_connecting_ip $client_ip_from_cf {
    ""      $client_ip_x_forwarded_for; # header missing: use X-Forwarded-For as the real IP
    default $http_cf_connecting_ip;
}

So if the cf_connecting_ip header from Cloudflare's proxied traffic is missing, we fall back to the x_forwarded_for header to get the client's real IP. And if that x_forwarded_for header is also missing, then we fall back to $remote_addr.

Thanks for explaining my log output to me; it helped me realise what the root of the issue was :) and thanks to Francis for telling me that my map just needed "" as an if-empty check.

B.R. via nginx Wrote: ------------------------------------------------------- > Again, it is not empty, nor containing an hyphen... > Look slowly to the log line and compare it to the log format. You use > hyphens as separators, which, again, might not be a good idea at this > precise location. > > ?The IP address you get is a private one though, so not 'client' but > rather > 'downstream'.? 
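[Editor's note: an alternative to chaining maps is the realip module, which rewrites $remote_addr itself when the connection comes from a trusted proxy. A sketch; the address range is a placeholder for the proxy's real published ranges:]

```nginx
# Only trust the header when the connection arrives from a known proxy.
set_real_ip_from  198.51.100.0/24;   # placeholder: use the load balancer's / CDN's ranges
real_ip_header    CF-Connecting-IP;  # or X-Forwarded-For for the load-balanced path
real_ip_recursive on;
```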
> ?It seems you are hiding behind CloudFlare, you should read their doc > to > see how to get the real client IP address. The HTTP header you tried > to use > seems to be empty (ie 'dash'. > > As for using hyphens rather than real emptiness, I guess that helps > validating there is no real value, ?differentating this case from a > bogus > 'empty' which would be a sign of a bug. > --- > *B. R.* > > On Sun, Mar 5, 2017 at 10:50 PM, c0nw0nk > wrote: > > > Thank's for the info :) > > > > But why is $remote_addr outputting a hyphen instead of the users > IP... > > > > I still expect to see the client's IP address. > > > > B.R. via nginx Wrote: > > ------------------------------------------------------- > > > That is because it is not: your eyes deceived you having a too > quick > > > look > > > at the log line. > > > > > > Your 'empty' variables are actually showing the value '-' in this > log > > > line. > > > It probably does not help debugging to have static '-' mixed in > the > > > format > > > of your log lines where you put them. > > > --- > > > *B. R.* > > > > > > On Sun, Mar 5, 2017 at 10:00 PM, c0nw0nk > > > > wrote: > > > > > > > Francis Daly Wrote: > > > > ------------------------------------------------------- > > > > > On Fri, Mar 03, 2017 at 10:47:26AM -0500, c0nw0nk wrote: > > > > > > > > > > Hi there, > > > > > > > > > > > map $http_cf_connecting_ip $client_ip_from_cf { > > > > > > default $http_cf_connecting_ip; > > > > > > } > > > > > > > > > > > > How can I make it so if the client did not send that $http_ > > > header > > > > > it makes > > > > > > $client_ip_from_cf variable value = $binary_remote_addr > > > > > > > > > > > > Not sure how to check in a map if that http header is > present. > > > > > > > > > > If the http header is absent, the matching variable is empty. > So > > > it > > > > > will > > > > > have the value "". > > > > > > > > > > Use that as the first half of your "map" pair. 
> > > > > > > > > > f > > > > > -- > > > > > Francis Daly francis at daoine.org > > > > > _______________________________________________ > > > > > nginx mailing list > > > > > nginx at nginx.org > > > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > > > > > Hey Francis, > > > > > > > > > > > > HTTP BLOCK > > > > log_format cf_custom 'CFIP:$http_cf_connecting_ip - $remote_addr > - > > > > $remote_user [$time_local] ' > > > > '"$request" Status:$status $body_bytes_sent ' > > > > '"$http_referer" "$http_user_agent"' > > > > '$http_cf_ray'; > > > > map $status $loggable { > > > > ~^[23] 0; > > > > default 1; > > > > } > > > > access_log logs/access.log cf_custom if=$loggable; > > > > > > > > map $remote_addr $client_ip_from_cf { > > > > default $remote_addr; > > > > } > > > > > > > > > > > > Access.log output > > > > CFIP:- - 10.108.22.40 - - [05/Mar/2017:21:27:50 +0100] "GET > > > > > > > > > > > > Any idea why the remote_addr is empty surely it should display > to me > > > the > > > > clients IP address i am not proxying traffic or anything like > that. > > > I > > > > expected to see an IP there instead it seems the value is empty. > > > > > > > > Posted at Nginx Forum: https://forum.nginx.org/read.p > > > > hp?2,272744,272766#msg-272766 > > > > > > > > _______________________________________________ > > > > nginx mailing list > > > > nginx at nginx.org > > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > Posted at Nginx Forum: https://forum.nginx.org/read. 
> > php?2,272744,272768#msg-272768 > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272744,272779#msg-272779 From mikydevel at yahoo.fr Mon Mar 6 21:35:03 2017 From: mikydevel at yahoo.fr (Mik J) Date: Mon, 6 Mar 2017 21:35:03 +0000 (UTC) Subject: Reverse proxy problem with an application References: <1797319055.5405349.1488836103023.ref@mail.yahoo.com> Message-ID: <1797319055.5405349.1488836103023@mail.yahoo.com> Hello, I have an application running behind an nginx reverse proxy and I can't make it work.

a) If I access this application using https://1.1.1.1:443, it works (certificate warning).
b) If I access this application using https://myapp.mydomain.org, I get access to the login page.

    location ^~ / {
        proxy_pass        https://1.1.1.1:443;
        proxy_redirect    off;
        proxy_set_header  Host             $http_host;
        proxy_set_header  X-Real-IP        $remote_addr;
        proxy_set_header  X-Forwarded-For  $proxy_add_x_forwarded_for;
        proxy_hide_header X-Frame-Options;
        proxy_hide_header X-Content-Security-Policy;
        proxy_hide_header X-Content-Type-Options;
        proxy_hide_header X-WebKit-CSP;
        proxy_hide_header content-security-policy;
        proxy_hide_header x-xss-protection;
        proxy_set_header  X-NginX-Proxy true;
        proxy_ssl_session_reuse off;
    }

c) I log in on the page and after some time (2-3 seconds) the application logs me out.

When I log in directly, case a), I notice that I have (Firebug): CookieSaveStateCookie=root; APPSESSIONID=070ABC6AE433D2CAEDCFFB1E43074416; testcookieenabled

Whereas when I log in in case c) I have: APPSESSIONID=070ABC6AE433D2CAEDCFFB1E43074416; testcookieenabled

So I feel there's a problem with the session or something like that.

PS: There is only one backend server and I can't run plain http (disable https).

Does anyone have an idea?

-------------- next part -------------- An HTML attachment was scrubbed... URL: From tjlp at sina.com Tue Mar 7 01:37:54 2017 From: tjlp at sina.com (tjlp at sina.com) Date: Tue, 07 Mar 2017 09:37:54 +0800 Subject: =?UTF-8?B?5Zue5aSN77yaUmU6X+WbnuWkje+8mlJlOl/lm57lpI3vvJpSZTpfSXNzdWVfYWJv?= =?UTF-8?B?dXRfbmdpbnhfcmVtb3ZpbmdfdGhlX2hlYWRlcl8iQ29ubmVjdGlvbiJfaW5f?= =?UTF-8?B?SFRUUF9yZXNwb25zZT8=?= Message-ID: <20170307013754.66B16B000D0@webmail.sinamail.sina.com.cn> Hi, Aleks, I tried your proposal and it doesn't work. Actually my issue is the same as this one: http://stackoverflow.com/questions/5100971/nginx-and-proxy-pass-send-connection-close-headers.

1. I added "keepalive_requests 0". The result is that the "Connection: close" header is sent to the client for every response. That does not match my requirement; our application decides whether to finish the application session using this header.

2. I added "proxy_pass_header Connection". Nginx keeps sending the "Connection: keep-alive" header to the client even when the header from the upstream server is "Connection: close".

It seems Nginx has some special handling for the Connection header in responses. The openresty author suggests that the only way to change this response header is to change the nginx C code. See this issue: https://github.com/openresty/headers-more-nginx-module/issues/22#issuecomment-31585052.

Thanks Liu Peng ----- ???? 
----- ????Aleksandar Lazic ????tjlp at sina.com ????nginx ???Re:_???Re:_???Re:_Issue_about_nginx_removing_the_header_"Connection"_in_HTTP_response? ???2017?03?04? 17?22? Hi Liu Peng. Am 04-03-2017 09:12, schrieb tjlp at sina.com: > > Hi, Alexks, > > I don't want to hide the header. > My problem is that Nginx change the "Connection: close" header in the > reponse from upstream server to "Connction: keep-alive" and send to > client. I want to keep the original "Connection: close" header. Ah that's a clear question. It took us only 3 rounds to get to this clear question ;-) So now the standard Questions from me: What's the output of nginx -V ? What's your config? Maybe you have set 'keepalive' in the upstream config http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive or 'proxy_http_version 1.1;' http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version as a last resort you can just pass the header with 'proxy_pass_header Connection;'. http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass_header Choose the solution which fit's to your demand. I can only guess due to the fact that we don't know your config. May I ask you to take a look into this document, which exists in several languages, thank you very much. http://www.catb.org/~esr/faqs/smart-questions.html Best regards Aleks > Thanks > Liu Peng > > ----- ???? ----- > ????Aleksandar Lazic > ????tjlp at sina.com > ????nginx > ???Re:_???Re:_Issue_about_nginx_removing_the_header_"Connection"_in_HTTP_response? > ???2017?03?03? 16?19? > Hi. > > then one directive upward. > > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header > > Cheers > > aleks > > Am 03-03-2017 06:00, schrieb tjlp at sina.com: > >> Hi, >> >> What I mention is the header in response from backend server. Your >> answer about proxy_set_header is the "Connection" header in request. >> >> Thanks >> Liu Peng >> >> ----- ???? 
----- >> ????Aleksandar Lazic >> ????nginx at nginx.org >> ????tjlp at sina.com >> ???Re: Issue about nginx removing the header "Connection" in HTTP >> response? >> ???2017?03?03? 06?25? >> >> Hi. >> Am 01-03-2017 08:29, schrieb tjlp at sina.com: >>> Hi, nginx guy, >>> >>> In our system, for some special requests, the upstream server will >>> return a response which the header includes "Connection: Close". >>> According to HTTP protocol, "Connection" is one-hop header. >>> So, nginx will remove this header and the client can't do the >>> business >>> logic correctly. >>> >>> How to handle this scenario? >> you mean something like this? >> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_set_header >> If the value of a header field is an empty string then this field will >> not be passed to a proxied server: >> proxy_set_header Connection ""; >>> Thanks >>> Liu Peng >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Mar 7 05:49:51 2017 From: nginx-forum at forum.nginx.org (Nomad Worker) Date: Tue, 07 Mar 2017 00:49:51 -0500 Subject: ssl_session_timeout issues In-Reply-To: <20170306140232.GC23126@mdounin.ru> References: <20170306140232.GC23126@mdounin.ru> Message-ID: <3bb27db6be5b5857d19739d79297d9bd.NginxMailingListEnglish@forum.nginx.org> Thank you Maxim, my reading is wrong, this line of code is in function ngx_ssl_session_cache, and I ignored it. 
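[Editor's note: to summarise Maxim's answer in configuration terms, a single timeout covers both resumption mechanisms, since nginx passes it to SSL_CTX_set_timeout(). Values below are arbitrary examples, not from the thread:]

```nginx
ssl_session_cache   shared:SSL:10m;  # server-side session cache
ssl_session_tickets on;              # client-side session tickets
ssl_session_timeout 10m;             # applies to both of the above
```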
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272772,272783#msg-272783 From al-nginx at none.at Tue Mar 7 07:39:41 2017 From: al-nginx at none.at (Aleksandar Lazic) Date: Tue, 07 Mar 2017 08:39:41 +0100 Subject: =?UTF-8?B?UmU6IOWbnuWkje+8mlJlOl/lm57lpI3vvJpSZTpf5Zue5aSN77yaUmU6X0lzc3Vl?= =?UTF-8?B?X2Fib3V0X25naW54X3JlbW92aW5nX3RoZV9oZWFkZXJfIkNvbm5lY3Rpb24i?= =?UTF-8?B?X2luX0hUVFBfcmVzcG9uc2U/?= In-Reply-To: <20170307013754.66B16B000D0@webmail.sinamail.sina.com.cn> References: <20170307013754.66B16B000D0@webmail.sinamail.sina.com.cn> Message-ID: <6f6180c753696973e64fc30b97fba968@none.at> Hi Liu Peng. We still don't know your nginx version nor your config! Cite from below: > So now the standard Questions from me: > What's the output of nginx -V ? > What's your config? regards aleks Am 07-03-2017 02:37, schrieb tjlp at sina.com: > Hi, Alexks, > > I try your proposal and it doesn't work. Actually my issue is the same > as this one > http://stackoverflow.com/questions/5100971/nginx-and-proxy-pass-send-connection-close-headers. > > 1. I add "keeplive_request 0". The result is that the "Connection: > close" header is sent to client for every response. That does not match > my requirement. Our application decides whether to finish the > application session using this header. > > 2. I add "proxy_pass_header Connection". Nginx keeps sending > "Connection: keep-alive" header to client even the header is > "Connection: close" from upstream server. > > Seems Nginx has some special handling for the Connection header in > response. The openresty author suggests that the only way for changing > response header change the nginx C code for this issue. See this issue: > https://github.com/openresty/headers-more-nginx-module/issues/22#issuecomment-31585052. > > Thanks > Liu Peng > > ----- ???? ----- > ????Aleksandar Lazic > ????tjlp at sina.com > ????nginx > ???Re:_???Re:_???Re:_Issue_about_nginx_removing_the_header_"Connection"_in_HTTP_response? > ???2017?03?04? 17?22? 
> > Hi Liu Peng. > Am 04-03-2017 09:12, schrieb tjlp at sina.com: >> >> Hi, Alexks, >> >> I don't want to hide the header. >> My problem is that Nginx change the "Connection: close" header in the >> reponse from upstream server to "Connction: keep-alive" and send to >> client. I want to keep the original "Connection: close" header. > Ah that's a clear question. > It took us only 3 rounds to get to this clear question ;-) > So now the standard Questions from me: > What's the output of nginx -V ? > What's your config? > Maybe you have set 'keepalive' in the upstream config > http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive > or > 'proxy_http_version 1.1;' > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version > as a last resort you can just pass the header with > 'proxy_pass_header Connection;'. > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass_header > Choose the solution which fit's to your demand. > I can only guess due to the fact that we don't know your config. > May I ask you to take a look into this document, which exists in > several > languages, thank you very much. > http://www.catb.org/~esr/faqs/smart-questions.html > Best regards > Aleks >> Thanks >> Liu Peng >> >> ----- ???? ----- >> ????Aleksandar Lazic >> ????tjlp at sina.com >> ????nginx >> ???Re:_???Re:_Issue_about_nginx_removing_the_header_"Connection"_in_HTTP_response? >> ???2017?03?03? 16?19? >> Hi. >> >> then one directive upward. >> >> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header >> >> Cheers >> >> aleks >> >> Am 03-03-2017 06:00, schrieb tjlp at sina.com: >> >>> Hi, >>> >>> What I mention is the header in response from backend server. Your >>> answer about proxy_set_header is the "Connection" header in request. >>> >>> Thanks >>> Liu Peng >>> >>> ----- ???? 
----- >>> ????Aleksandar Lazic >>> ????nginx at nginx.org >>> ????tjlp at sina.com >>> ???Re: Issue about nginx removing the header "Connection" in HTTP >>> response? >>> ???2017?03?03? 06?25? >>> >>> Hi. >>> Am 01-03-2017 08:29, schrieb tjlp at sina.com: >>>> Hi, nginx guy, >>>> >>>> In our system, for some special requests, the upstream server will >>>> return a response which the header includes "Connection: Close". >>>> According to HTTP protocol, "Connection" is one-hop header. >>>> So, nginx will remove this header and the client can't do the >>>> business >>>> logic correctly. >>>> >>>> How to handle this scenario? >>> you mean something like this? >>> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_set_header >>> If the value of a header field is an empty string then this field >>> will >>> not be passed to a proxied server: >>> proxy_set_header Connection ""; >>>> Thanks >>>> Liu Peng >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx From tjlp at sina.com Tue Mar 7 09:49:10 2017 From: tjlp at sina.com (tjlp at sina.com) Date: Tue, 07 Mar 2017 17:49:10 +0800 Subject: =?UTF-8?B?5Zue5aSN77yaUmU6X+WbnuWkje+8mlJlOl/lm57lpI3vvJpSZTpf5Zue5aSN77ya?= =?UTF-8?B?UmU6X0lzc3VlX2Fib3V0X25naW54X3JlbW92aW5nX3RoZV9oZWFkZXJfIkNv?= =?UTF-8?B?bm5lY3Rpb24iX2luX0hUVFBfcmVzcG9uc2U/?= Message-ID: <20170307094910.43F30B000D0@webmail.sinamail.sina.com.cn> Hi, Aleks, The result of nginx -V is as follow: nginx version: nginx/1.11.1 built by gcc 4.9.2 (Debian 4.9.2-10) built with OpenSSL 1.0.1t 3 May 2016 TLS SNI support enabled configure arguments: --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi 
--http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_sub_module --with-http_v2_module --with-http_spdy_module --with-stream --with-stream_ssl_module --with-threads --with-file-aio --without-mail_pop3_module --without-mail_smtp_module --without-mail_imap_module --without-http_uwsgi_module --without-http_scgi_module --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --add-module=/tmp/build/ngx_devel_kit-0.3.0 --add-module=/tmp/build/set-misc-nginx-module-0.30 --add-module=/tmp/build/nginx-module-vts-0.1.9 --add-module=/tmp/build/lua-nginx-module-0.10.5 --add-module=/tmp/build/headers-more-nginx-module-0.30 --add-module=/tmp/build/nginx-goodies-nginx-sticky-module-ng-c78b7dd79d0d --add-module=/tmp/build/nginx-http-auth-digest-f85f5d6fdcc06002ff879f5cbce930999c287011 --add-module=/tmp/build/ngx_http_substitutions_filter_module-bc58cb11844bc42735bbaef7085ea86ace46d05b --add-module=/tmp/build/lua-upstream-nginx-module-0.05 The nginx conf is: daemon off; worker_processes 2; pid /run/nginx.pid; worker_rlimit_nofile 131072; pcre_jit on; events { multi_accept on; worker_connections 16384; use epoll; } http { lua_shared_dict server_sessioncnt_dict 20k; lua_shared_dict server_dict 20k; lua_shared_dict server_acceptnewconn_dict 20k; lua_shared_dict sessionid_server_dict 100k; real_ip_header X-Forwarded-For; set_real_ip_from 0.0.0.0/0; real_ip_recursive on; geoip_country /etc/nginx/GeoIP.dat; geoip_city /etc/nginx/GeoLiteCity.dat; geoip_proxy_recursive on; vhost_traffic_status_zone shared:vhost_traffic_status:10m; vhost_traffic_status_filter_by_set_key 
$geoip_country_code country::*; # lua section to return proper error codes when custom pages are used lua_package_path '.?.lua;./etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/lua-resty-http/lib/?.lua;/etc/nginx/lua/vendor/lua-resty-lrucache/lib/?.lua;/etc/nginx/lua/vendor/lua-resty-core/lib/?.lua;/etc/nginx/lua/vendor/lua-resty-balancer/lib/?.lua;'; init_by_lua_file /etc/nginx/lua/init_by_lua.lua; sendfile on; aio threads; tcp_nopush on; tcp_nodelay on; log_subrequest on; reset_timedout_connection on; keepalive_timeout 75s; types_hash_max_size 2048; server_names_hash_max_size 512; server_names_hash_bucket_size 64; include /etc/nginx/mime.types; default_type text/html; gzip on; gzip_comp_level 5; gzip_http_version 1.1; gzip_min_length 256; gzip_types application/atom+xml application/javascript aplication/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component; gzip_proxied any; client_max_body_size "64m"; log_format upstreaminfo '$remote_addr - ' '[$proxy_add_x_forwarded_for] - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" ' '$request_length $request_time $upstream_addr $upstream_response_length $upstream_response_time $upstream_status'; map $request $loggable { default 1; } access_log /var/log/nginx/access.log upstreaminfo if=$loggable; error_log /var/log/nginx/error.log notice; map $http_upgrade $connection_upgrade { default upgrade; '' close; } # trust http_x_forwarded_proto headers correctly indicate ssl offloading map $http_x_forwarded_proto $pass_access_scheme { default $http_x_forwarded_proto; '' $scheme; } # Map a response error watching the header Content-Type map $http_accept $httpAccept { default html; application/json json; application/xml xml; text/plain text; } map $httpAccept $httpReturnType { default 
text/html; json application/json; xml application/xml; text text/plain; } server_name_in_redirect off; port_in_redirect off; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # turn on session caching to drastically improve performance ssl_session_cache builtin:1000 shared:SSL:10m; ssl_session_timeout 10m; # allow configuring ssl session tickets ssl_session_tickets on; # slightly reduce the time-to-first-byte ssl_buffer_size 4k; # allow configuring custom ssl ciphers ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA'; ssl_prefer_server_ciphers on; # In case of errors try the next upstream server before returning an error proxy_next_upstream error timeout invalid_header http_502 http_503 http_504; upstream liupeng-sm-rte-svc-13080 { server 172.77.69.10:13080; server 172.77.87.9:13080; balancer_by_lua_file /etc/nginx/lua/balancer_by_lua.lua; } server { server_name _; listen 80; listen 443 ssl spdy http2; # PEM sha: aad58c371e57f3c243a7c8143c17762c67a0f18a ssl_certificate /etc/nginx-ssl/system-snake-oil-certificate.pem; ssl_certificate_key /etc/nginx-ssl/system-snake-oil-certificate.pem; more_set_headers "Strict-Transport-Security: max-age=15724800; includeSubDomains; preload"; vhost_traffic_status_filter_by_set_key $geoip_country_code country::$server_name; location /SM/ui { proxy_set_header Host $host; # Pass Real 
IP proxy_set_header X-Real-IP $remote_addr; # Allow websocket connections proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection ""; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Port $server_port; proxy_set_header X-Forwarded-Proto $pass_access_scheme; # mitigate HTTPoxy Vulnerability # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/ proxy_set_header Proxy ""; proxy_connect_timeout 5s; proxy_send_timeout 60s; proxy_read_timeout 60s; proxy_redirect off; proxy_buffering off; proxy_http_version 1.1; proxy_pass http://liupeng-sm-rte-svc-13080; rewrite_by_lua_file /etc/nginx/lua/rewrite_by_lua.lua; header_filter_by_lua_file /etc/nginx/lua/header_filter_by_lua.lua; } } } ----- ???? ----- ????Aleksandar Lazic ????tjlp at sina.com ????nginx ???Re:_???Re:_???Re:_???Re:_Issue_about_nginx_removing_the_header_"Connection"_in_HTTP_response? ???2017?03?07? 15?39? Hi Liu Peng. We still don't know your nginx version nor your config! Cite from below: > So now the standard Questions from me: > What's the output of nginx -V ? > What's your config? regards aleks Am 07-03-2017 02:37, schrieb tjlp at sina.com: > Hi, Alexks, > > I try your proposal and it doesn't work. Actually my issue is the same > as this one > http://stackoverflow.com/questions/5100971/nginx-and-proxy-pass-send-connection-close-headers. > > 1. I add "keeplive_request 0". The result is that the "Connection: > close" header is sent to client for every response. That does not match > my requirement. Our application decides whether to finish the > application session using this header. > > 2. I add "proxy_pass_header Connection". Nginx keeps sending > "Connection: keep-alive" header to client even the header is > "Connection: close" from upstream server. > > Seems Nginx has some special handling for the Connection header in > response. 
The openresty author suggests that the only way to change this > response header is to change the nginx C code for this issue. See this issue: > https://github.com/openresty/headers-more-nginx-module/issues/22#issuecomment-31585052. > > Thanks > Liu Peng > > ----- Original Message ----- > From: Aleksandar Lazic > To: tjlp at sina.com > Cc: nginx > Subject: Re: Re: Re: Issue about nginx removing the header "Connection" in HTTP response? > Date: 2017-03-04 17:22 > > Hi Liu Peng. > On 04-03-2017 09:12, tjlp at sina.com wrote: >> >> Hi, Aleks, >> >> I don't want to hide the header. >> My problem is that Nginx changes the "Connection: close" header in the >> response from the upstream server to "Connection: keep-alive" and sends it to the >> client. I want to keep the original "Connection: close" header. > Ah, that's a clear question. > It took us only 3 rounds to get to this clear question ;-) > So now the standard Questions from me: > What's the output of nginx -V ? > What's your config? > Maybe you have set 'keepalive' in the upstream config > http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive > or > 'proxy_http_version 1.1;' > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version > As a last resort you can just pass the header with > 'proxy_pass_header Connection;'. > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass_header > Choose the solution which fits your demand. > I can only guess, due to the fact that we don't know your config. > May I ask you to take a look into this document, which exists in > several > languages, thank you very much. > http://www.catb.org/~esr/faqs/smart-questions.html > Best regards > Aleks >> Thanks >> Liu Peng >> >> ----- Original Message ----- >> From: Aleksandar Lazic >> To: tjlp at sina.com >> Cc: nginx >> Subject: Re: Re: Issue about nginx removing the header "Connection" in HTTP response? >> Date: 2017-03-03 16:19 >> Hi. >> >> then one directive upward.
>> >> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header >> >> Cheers >> >> aleks >> >> On 03-03-2017 06:00, tjlp at sina.com wrote: >> >>> Hi, >>> >>> What I mention is the header in the response from the backend server. Your >>> answer about proxy_set_header is about the "Connection" header in the request. >>> >>> Thanks >>> Liu Peng >>> >>> ----- Original Message ----- >>> From: Aleksandar Lazic >>> To: nginx at nginx.org >>> Cc: tjlp at sina.com >>> Subject: Re: Issue about nginx removing the header "Connection" in HTTP >>> response? >>> Date: 2017-03-03 06:25 >>> >>> Hi. >>> On 01-03-2017 08:29, tjlp at sina.com wrote: >>>> Hi, nginx guy, >>>> >>>> In our system, for some special requests, the upstream server will >>>> return a response whose headers include "Connection: Close". >>>> According to the HTTP protocol, "Connection" is a hop-by-hop header. >>>> So nginx will remove this header and the client can't execute its >>>> business >>>> logic correctly. >>>> >>>> How to handle this scenario? >>> you mean something like this? >>> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_set_header >>> If the value of a header field is an empty string then this field >>> will >>> not be passed to a proxied server: >>> proxy_set_header Connection ""; >>>> Thanks >>>> Liu Peng >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Mar 7 12:33:58 2017 From: nginx-forum at forum.nginx.org (zaidahmd) Date: Tue, 07 Mar 2017 07:33:58 -0500 Subject: Can NGINX Forward the 401 Response to Upstream server to Destroy Temp User data Message-ID: <8234620e528b80f80c85ec93453e1761.NginxMailingListEnglish@forum.nginx.org> I have an NGINX reverse proxy and upstream server.
NGINX authenticates the incoming request and forwards the request to the upstream server, which also authenticates the request first and then creates a session for the user. I want to know: if the user session expires in NGINX, will NGINX forward the request to the upstream server so that it also destroys the user session, or will NGINX just destroy the session in its authentication service and not inform the upstream server to destroy the session? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272796,272796#msg-272796 From nginx-forum at forum.nginx.org Tue Mar 7 13:18:02 2017 From: nginx-forum at forum.nginx.org (alweiss) Date: Tue, 07 Mar 2017 08:18:02 -0500 Subject: Efficient CRL checking at Nginx In-Reply-To: <23ac810b9b4d0f3f03fc8ffa9743aca3.NginxMailingListEnglish@forum.nginx.org> References: <20141217154720.GQ45960@mdounin.ru> <23ac810b9b4d0f3f03fc8ffa9743aca3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8948412cdda62d1392ba30ee36c4aae7.NginxMailingListEnglish@forum.nginx.org> Hi Maxim For specific needs, if I don't add the ssl_crl directive to my ssl configuration, would nginx simply not check anything, or would it issue a live query to the URL indicated as a CRL distribution point in the client certificate, introducing high latency ...? In other words, how do I completely disable CRL checking on client authentication? Thanks ! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,255509,272798#msg-272798 From mdounin at mdounin.ru Tue Mar 7 13:36:18 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 7 Mar 2017 16:36:18 +0300 Subject: Efficient CRL checking at Nginx In-Reply-To: <8948412cdda62d1392ba30ee36c4aae7.NginxMailingListEnglish@forum.nginx.org> References: <20141217154720.GQ45960@mdounin.ru> <23ac810b9b4d0f3f03fc8ffa9743aca3.NginxMailingListEnglish@forum.nginx.org> <8948412cdda62d1392ba30ee36c4aae7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170307133617.GI23126@mdounin.ru> Hello!
On Tue, Mar 07, 2017 at 08:18:02AM -0500, alweiss wrote: > Hi Maxim > For specific needs, if I don't add the ssl_crl directive to my ssl > configuration, would nginx simply not check anything, or would it issue a > live query to the URL indicated as a CRL distribution point in the client > certificate, introducing high latency ...? > > In other words, how do I completely disable CRL checking on client > authentication? CRL checking is only enabled when you explicitly load a CRL using the ssl_crl directive. That is, CRL checking is disabled by default; you don't need to do anything to disable it. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Tue Mar 7 14:01:46 2017 From: nginx-forum at forum.nginx.org (alweiss) Date: Tue, 07 Mar 2017 09:01:46 -0500 Subject: Efficient CRL checking at Nginx In-Reply-To: <20170307133617.GI23126@mdounin.ru> References: <20170307133617.GI23126@mdounin.ru> Message-ID: Understood. Thanks much for your quick reply ! Alex Posted at Nginx Forum: https://forum.nginx.org/read.php?2,255509,272800#msg-272800 From nginx-forum at forum.nginx.org Tue Mar 7 19:50:57 2017 From: nginx-forum at forum.nginx.org (larsg) Date: Tue, 07 Mar 2017 14:50:57 -0500 Subject: Reverse Proxy with 500k connections Message-ID: Hi, we are operating native nginx 1.8.1 on RHEL as a reverse proxy. The nginx routes requests to a backend server that can be reached from the proxy via a single internal IP address. We have to support a large number of concurrent websocket connections - say 100k to 500k. As we don't want to increase the number of proxy instances (with different IPs), and we cannot use the "proxy_bind transparent" option (it was introduced in a later nginx release, and an upgrade is not possible), we wanted to configure nginx to use different source IPs when routing to the backend. Thus, we want nginx to select an available source IP + source port when a connection is established with the backend.
For that we assigned ten internal IPs to the proxy server and used the proxy_bind directive bound to 0.0.0.0. But this approach seems not to work. The nginx instance seems to always use the first IP as the source IP. Using multiple proxy_bind's is not possible. So my question is: How can I configure nginx to select from a pool of source IPs? Or generally: how to overcome the 64k problem? Best Regards Lars ------- extract from config upstream backend { server 192.168.1.21:443; } server { listen 443 ssl; proxy_bind 0.0.0.0; location /service { proxy_pass https://backend; ... } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272808,272808#msg-272808 From nelsonmarcos at gmail.com Tue Mar 7 21:12:24 2017 From: nelsonmarcos at gmail.com (Nelson Marcos) Date: Tue, 7 Mar 2017 18:12:24 -0300 Subject: Reverse Proxy with 500k connections In-Reply-To: References: Message-ID: Do you really need to use different source IPs, or is it just a solution that you picked? Also, is it an option to set the keepalive option in your upstream configuration section? http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive Um abraço, NM 2017-03-07 16:50 GMT-03:00 larsg : > Hi, > > we are operating native nginx 1.8.1 on RHEL as a reverse proxy. > The nginx routes requests to a backend server that can be reached from the > proxy via a single internal IP address. > We have to support a large number of concurrent websocket connections - say > 100k to 500k. > > As we don't want to increase the number of proxy instances (with different > IPs) and we cannot use the "proxy_bind transparent" option (was introduced > in > a later nginx release, upgrade is not possible) we wanted to configure the > nginx to use different source IPs when routing to the backend. Thus, we > want > nginx to select an available source IP + source port when a connection is > established with the backend. > > For that we assigned ten internal IPs to the proxy server and used the > proxy_bind directive bound to 0.0.0.0.
> But this approach seems not to work. The nginx instance seems to always use > the > first IP as the source IP. > Using multiple proxy_bind's is not possible. > > So my question is: How can I configure nginx to select from a pool of > source > IPs? Or generally: how to overcome the 64k problem? > > Best Regards > Lars > > ------- extract from config > > upstream backend { > server 192.168.1.21:443; > } > > server { > listen 443 ssl; > proxy_bind 0.0.0.0; > > location /service { > proxy_pass https://backend; > ... > } > } > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,272808,272808#msg-272808 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From efeldhusen.lists at gmail.com Tue Mar 7 21:21:56 2017 From: efeldhusen.lists at gmail.com (Eric Feldhusen) Date: Tue, 7 Mar 2017 16:21:56 -0500 Subject: Nginx reverse proxy for TFTP UDP port 69 traffic Message-ID: <04D23F0F-B586-42A5-8C9E-AFA77F5ECEBB@gmail.com> I'm trying to use Nginx to reverse proxy TFTP UDP port 69 traffic and I'm having a problem with getting files through the nginx reverse proxy. My configuration is simple: I'm running TFTP on one CentOS 6.x server and the Nginx reverse proxy on another CentOS 6.x server with the latest Nginx mainline 1.11.10 from the nginx.org repository. TFTP connections directly to the TFTP server work. Using the same commands through the Nginx reverse proxy connects, but will not download or upload a file through it. If you have any suggestions, I'd appreciate a nudge in the right direction. I'm assuming it's something I'm missing. Eric Feldhusen My configuration is below. The TFTP server is at 192.168.1.11 and the Nginx reverse proxy is at 192.168.1.145. No firewalls on either server.
stream { upstream staging_tftp_servers { server 192.168.1.70:69; } server { listen 69 udp; #udp proxy_pass staging_tftp_servers; error_log /var/log/nginx/tftp.log info; } } I'm seeing these in the tftp.log 2017/03/06 14:34:44 [info] 32676#32676: *554 udp upstream disconnected, bytes from/to client:36/0, bytes from/to upstream:0/36 2017/03/06 14:34:46 [info] 32676#32676: *556 udp upstream disconnected, bytes from/to client:36/0, bytes from/to upstream:0/36 2017/03/06 14:34:47 [info] 32676#32676: *1439 udp client 10.1.0.14:2277 connected to 0.0.0.0:69 2017/03/06 14:34:47 [info] 32676#32676: *1439 udp proxy 192.168.1.145:37961 connected to 192.168.1.11:69 2017/03/06 14:34:48 [info] 32676#32676: *558 udp upstream disconnected, bytes from/to client:23/0, bytes from/to upstream:0/23 2017/03/06 14:34:48 [info] 32676#32676: *560 udp upstream disconnected, bytes from/to client:36/0, bytes from/to upstream:0/36 2017/03/06 14:34:49 [info] 32676#32676: *1441 udp client 10.1.0.15:1090 connected to 0.0.0.0:69 2017/03/06 14:34:49 [info] 32676#32676: *1441 udp proxy 192.168.1.145:38526 connected to 192.168.1.11:69 2017/03/06 14:34:50 [info] 32676#32676: *562 udp upstream disconnected, bytes from/to client:36/0, bytes from/to upstream:0/36 2017/03/06 14:34:53 [info] 32676#32676: *1443 udp client 10.1.0.14:2277 connected to 0.0.0.0:69 2017/03/06 14:34:53 [info] 32676#32676: *1443 udp proxy 192.168.1.145:38689 connected to 192.168.1.11:69 2017/03/06 14:34:56 [info] 32676#32676: *564 udp upstream disconnected, bytes from/to client:23/0, bytes from/to upstream:0/23 2017/03/06 14:34:56 [info] 32676#32676: *566 udp upstream disconnected, bytes from/to client:36/0, bytes from/to upstream:0/36 -------------- next part -------------- An HTML attachment was scrubbed...
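As a side note on the stream block above: for UDP upstreams, nginx's stream proxy also has proxy_responses and proxy_timeout to bound how long each datagram "session" is tracked. A minimal sketch, with illustrative values and the addresses from the post:

```nginx
stream {
    upstream staging_tftp_servers {
        server 192.168.1.70:69;
    }

    server {
        listen 69 udp;
        proxy_pass staging_tftp_servers;
        # For request/response UDP protocols, stop tracking the session
        # after one expected reply instead of waiting for an idle timeout.
        # These values are illustrative assumptions, not from the post.
        proxy_responses 1;
        proxy_timeout 20s;
        error_log /var/log/nginx/tftp.log info;
    }
}
```

This only tunes session tracking; it does not address TFTP's use of a server-chosen ephemeral port for the data transfer, which (as explained further down the thread) is why the transfer itself still fails through a plain UDP proxy.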
URL: From rainer at ultra-secure.de Tue Mar 7 21:21:48 2017 From: rainer at ultra-secure.de (Rainer Duffner) Date: Tue, 7 Mar 2017 22:21:48 +0100 Subject: Reverse Proxy with 500k connections In-Reply-To: References: Message-ID: <14E44683-8D28-47A2-B371-7167A5F72A12@ultra-secure.de> > On 07.03.2017 at 22:12, Nelson Marcos wrote: > > Do you really need to use different source IPs, or is it just a solution that you picked? > > Also, is it an option to set the keepalive option in your upstream configuration section? > http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive I'm not sure if you can proxy websocket connections like HTTP connections. After all, they are persistent (hence the large number of connections). Why can't you (OP) do the upgrade to 1.10? I thought it's the only "supported" version anyway? -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Mar 7 21:23:59 2017 From: francis at daoine.org (Francis Daly) Date: Tue, 7 Mar 2017 21:23:59 +0000 Subject: Nginx Map how to check value if empty In-Reply-To: References: Message-ID: <20170307212359.GE15209@daoine.org> On Mon, Mar 06, 2017 at 02:12:40PM -0500, c0nw0nk wrote: Hi there, good that you've found some more answers. There's still some to be worked on, though, I suspect. > So to explain how to get the origin IP for each method someone could be > using, here is the list: > > Cloudflare's proxied traffic: > sets the header $http_cf_connecting_ip, so use this header to get the > client's real IP Stock nginx has the realip module which will allow you to use a value from one specific http header, as if it were the connecting address. And stock nginx knows that the client can set any header to any value, so it can be configured to only believe the value if it was set by a trusted source. (More or less).
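Spelled out as configuration, that realip behaviour looks roughly like this (the CIDR below is a placeholder, not an actual Cloudflare range; the real list of trusted proxy addresses has to come from the proxy operator):

```nginx
# Sketch of the realip approach described above.
# 203.0.113.0/24 is a placeholder for the trusted proxy's addresses.
set_real_ip_from 203.0.113.0/24;
real_ip_header   CF-Connecting-IP;
# With these set, requests arriving from 203.0.113.0/24 have
# $remote_addr replaced by the value of the CF-Connecting-IP header;
# requests from anywhere else keep their connection address.
```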
It looks like this $http_cf_connecting_ip contains a single IP address, which is the address of the thing that connected to Cloudflare -- either the client, or a proxy that it uses. And it can be trusted, if the incoming request went through the Cloudflare reverse proxy. (And, presumably, it is spoofed if the incoming request did not go through the Cloudflare reverse proxy.) > Traffic from Cloudflare via the DNS-only connections: > These would not have the $http_cf_connecting_ip header present. > But those connections hit a load-balancing IP which sets the > $http_x_forwarded_for header, so that is the way to get the client's real IP > via those connections. $http_x_forwarded_for is common enough; it can hold a list of IP addresses. The realip module knows how to deal with it. Whatever method you use to read it, you should be aware that the header is not necessarily exactly one IP address. And the client can set the header to any initial value; the "load balancing ip" (unless documented otherwise) probably creates-or-adds-to the header, rather than creates-or-replaces. > And then some connections don't hit my load balancing IP and go directly to > a specific origin server; these connections can use $remote_addr. They can. But those connections might also have $http_x_forwarded_for. And $http_cf_connecting_ip. So you will need a reliable way of distinguishing between case#1 and case#2 and case#3, if you care about that. (Probably, the majority of "innocent" requests will not have spoofed headers. If that is good enough for what you are trying to achieve, then you're ok.) > My Solution / conclusion : > > How to come up with a fix that allows me to obtain the real IP in a dynamic > situation like this ?
I would suggest one of: * go to extra measures to cause there to exist a new feature in nginx, such that the realip module will look at more than one header to determine the address to use or * recognise that if Cloudflare put in a CF-Connecting-IP header, they probably also put in an X-Forwarded-For header; ignore CF-Connecting-IP and just use the realip module with X-Forwarded-For. http://nginx.org/r/real_ip_header and the rest of that page. > I have solved my issue with the following. This will work, with the above caveats. If you have time to experiment, you may find that the realip module does something similar in a less fragile way. Cheers, f -- Francis Daly francis at daoine.org From simowitz at google.com Tue Mar 7 21:38:04 2017 From: simowitz at google.com (Jonathan Simowitz) Date: Tue, 7 Mar 2017 16:38:04 -0500 Subject: Passing $upstream_response_time in a header Message-ID: Hello, I have an nginx server that runs as a reverse proxy and I would like to pass the $upstream_response_time value in a header. I find that when I do, the value is actually a Unix timestamp with millisecond resolution instead of a value of seconds with millisecond resolution. Apparently this is automatically converted when written to the logs. Is there a way to trigger the conversion for passing in a header? Thank you, ~Jonathan -- Jonathan Simowitz | Jigsaw | Software Engineer | simowitz at google.com | 631-223-8608 -------------- next part -------------- An HTML attachment was scrubbed...
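For reference, a minimal sketch of passing the value through looks like this (the location and upstream name are illustrative; per the nginx documentation, $upstream_response_time already holds seconds with millisecond resolution, so no conversion should be needed):

```nginx
# Sketch: expose the upstream timing to the client in a response header.
location / {
    proxy_pass http://backend;   # "backend" is a placeholder upstream
    # "always" also adds the header on error responses (nginx >= 1.7.5).
    add_header X-Upstream-Response-Time $upstream_response_time always;
}
```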
URL: From vl at nginx.com Tue Mar 7 21:58:06 2017 From: vl at nginx.com (Vladimir Homutov) Date: Wed, 8 Mar 2017 00:58:06 +0300 Subject: Nginx reverse proxy for TFTP UDP port 69 traffic In-Reply-To: <04D23F0F-B586-42A5-8C9E-AFA77F5ECEBB@gmail.com> References: <04D23F0F-B586-42A5-8C9E-AFA77F5ECEBB@gmail.com> Message-ID: <2fd0e5d5-bd51-cff5-64e9-fdb254f519ed@nginx.com> On 08.03.2017 00:21, Eric Feldhusen wrote: > I'm trying to use Nginx to reverse proxy TFTP UDP port 69 traffic and > I'm having a problem with getting files through the nginx reverse proxy. > > My configuration is simple: I'm running TFTP on one CentOS 6.x server > and the Nginx reverse proxy on another CentOS 6.x server with the latest > Nginx mainline 1.11.10 from the nginx.org repository. > > TFTP connections directly to the TFTP server work. Using the same > commands through the Nginx reverse proxy connects, but will not > download or upload a file through it. > > If you have any suggestions, I'd appreciate a nudge in the right > direction. I'm assuming it's something I'm missing. > > Eric Feldhusen Unfortunately, TFTP will not work, because it requires that after the server's initial reply the client sends packets to a port chosen by the server (i.e. not 69, but an auto-assigned one). Also, a TFTP server recognizes clients by their source port, and that changes when a packet passes through the proxy - each packet originates from a new source port on the proxy. From tolga.ceylan at gmail.com Tue Mar 7 22:10:24 2017 From: tolga.ceylan at gmail.com (Tolga Ceylan) Date: Tue, 7 Mar 2017 14:10:24 -0800 Subject: Reverse Proxy with 500k connections In-Reply-To: <14E44683-8D28-47A2-B371-7167A5F72A12@ultra-secure.de> References: <14E44683-8D28-47A2-B371-7167A5F72A12@ultra-secure.de> Message-ID: How about using split_clients "${remote_addr}AAA" $proxy_ip { 10% 192.168.1.10; 10% 192.168.1.11; ... * 192.168.1.19; } proxy_bind $proxy_ip; where $proxy_ip is populated via the split_clients module to spread the traffic to 10 internal IPs.
or add 10 new listener ports (or ips) to your backend server instead, (and perhaps use least connected load balancing) in upstream {} set of 10 backends. eg: upstream backend { least_conn; server 192.168.1.21:443; server 192.168.1.21:444; server 192.168.1.21:445; server 192.168.1.21:446; server 192.168.1.21:447; server 192.168.1.21:448; server 192.168.1.21:449; server 192.168.1.21:450; server 192.168.1.21:451; server 192.168.1.21:452; } On Tue, Mar 7, 2017 at 1:21 PM, Rainer Duffner wrote: > > Am 07.03.2017 um 22:12 schrieb Nelson Marcos : > > Do you really need to use different source ips or it's a solution that you > picked? > > Also, is it a option to set the keepalive option in your upstream configure > section? > http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive > > > > > I?m not sure if you can proxy web socket connections like http-connections. > > After all, they are persistent (hence the large number of connections). > > Why can?t you (OP) do the upgrade to 1.10? I thought it?s the only > ?supported" version anyway? > > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From al-nginx at none.at Tue Mar 7 22:25:52 2017 From: al-nginx at none.at (Aleksandar Lazic) Date: Tue, 07 Mar 2017 23:25:52 +0100 Subject: =?UTF-8?B?UmU6IOWbnuWkje+8mlJlOl/lm57lpI3vvJpSZTpf5Zue5aSN77yaUmU6X+Wbng==?= =?UTF-8?B?5aSN77yaUmU6X0lzc3VlX2Fib3V0X25naW54X3JlbW92aW5nX3RoZV9oZWFk?= =?UTF-8?B?ZXJfIkNvbm5lY3Rpb24iX2luX0hUVFBfcmVzcG9uc2U/?= In-Reply-To: <20170307094910.43F30B000D0@webmail.sinamail.sina.com.cn> References: <20170307094910.43F30B000D0@webmail.sinamail.sina.com.cn> Message-ID: <902b60ba58740ab75c31f22379306670@none.at> Hi. Well that's a lot modules and lua stuff there. What's in the '*by_lua_file's ? Can you run from a specific IP the debug log to see what's happen in nginx? 
http://nginx.org/en/docs/debugging_log.html regards aleks Am 07-03-2017 10:49, schrieb tjlp at sina.com: > Hi, Aleks, > > The result of nginx -V is as follow: > nginx version: nginx/1.11.1 > built by gcc 4.9.2 (Debian 4.9.2-10) > built with OpenSSL 1.0.1t 3 May 2016 > TLS SNI support enabled > configure arguments: --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_sub_module --with-http_v2_module --with-http_spdy_module --with-stream --with-stream_ssl_module --with-threads --with-file-aio --without-mail_pop3_module --without-mail_smtp_module --without-mail_imap_module --without-http_uwsgi_module --without-http_scgi_module --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --add-module=/tmp/build/ngx_devel_kit-0.3.0 --add-module=/tmp/build/set-misc-nginx-module-0.30 --add-module=/tmp/build/nginx-module-vts-0.1.9 --add-module=/tmp/build/lua-nginx-module-0.10.5 --add-module=/tmp/build/headers-more-nginx-module-0.30 --add-module=/tmp/build/nginx-goodies-nginx-sticky-module-ng-c78b7dd79d0d --add-module=/tmp/build/nginx-http-auth-digest-f85f5d6fdcc06002ff879f5cbce930999c287011 --add-module=/tmp/build/ngx_http_substitutions_filter_module-bc58cb11844bc42735bbaef7085ea86ace46d05b --add-module=/tmp/build/lua-upstream-nginx-module-0.05 > > The nginx conf 
is: > > daemon off; > > worker_processes 2; > > pid /run/nginx.pid; > > worker_rlimit_nofile 131072; > > pcre_jit on; > > events { > multi_accept on; > worker_connections 16384; > use epoll; > } > > http { > > lua_shared_dict server_sessioncnt_dict 20k; > lua_shared_dict server_dict 20k; > lua_shared_dict server_acceptnewconn_dict 20k; > lua_shared_dict sessionid_server_dict 100k; > > real_ip_header X-Forwarded-For; > set_real_ip_from 0.0.0.0/0; > real_ip_recursive on; > > geoip_country /etc/nginx/GeoIP.dat; > geoip_city /etc/nginx/GeoLiteCity.dat; > geoip_proxy_recursive on; > vhost_traffic_status_zone shared:vhost_traffic_status:10m; > vhost_traffic_status_filter_by_set_key $geoip_country_code country::*; > # lua section to return proper error codes when custom pages are used > lua_package_path '.?.lua;./etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/lua-resty-http/lib/?.lua;/etc/nginx/lua/vendor/lua-resty-lrucache/lib/?.lua;/etc/nginx/lua/vendor/lua-resty-core/lib/?.lua;/etc/nginx/lua/vendor/lua-resty-balancer/lib/?.lua;'; > > init_by_lua_file /etc/nginx/lua/init_by_lua.lua; > > sendfile on; > aio threads; > tcp_nopush on; > tcp_nodelay on; > > log_subrequest on; > > reset_timedout_connection on; > > keepalive_timeout 75s; > > types_hash_max_size 2048; > server_names_hash_max_size 512; > server_names_hash_bucket_size 64; > > include /etc/nginx/mime.types; > default_type text/html; > gzip on; > gzip_comp_level 5; > gzip_http_version 1.1; > gzip_min_length 256; > gzip_types application/atom+xml application/javascript aplication/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component; > gzip_proxied any; > > client_max_body_size "64m"; > > log_format upstreaminfo '$remote_addr - ' > '[$proxy_add_x_forwarded_for] - $remote_user [$time_local] "$request" $status 
$body_bytes_sent "$http_referer" "$http_user_agent" ' > '$request_length $request_time $upstream_addr $upstream_response_length $upstream_response_time $upstream_status'; > > map $request $loggable { > default 1; > } > > access_log /var/log/nginx/access.log upstreaminfo if=$loggable; > error_log /var/log/nginx/error.log notice; > > map $http_upgrade $connection_upgrade { > default upgrade; > '' close; > } > > # trust http_x_forwarded_proto headers correctly indicate ssl offloading > map $http_x_forwarded_proto $pass_access_scheme { > default $http_x_forwarded_proto; > '' $scheme; > } > > # Map a response error watching the header Content-Type > map $http_accept $httpAccept { > default html; > application/json json; > application/xml xml; > text/plain text; > } > > map $httpAccept $httpReturnType { > default text/html; > json application/json; > xml application/xml; > text text/plain; > } > > server_name_in_redirect off; > port_in_redirect off; > > ssl_protocols TLSv1 TLSv1.1 TLSv1.2; > > # turn on session caching to drastically improve performance > > ssl_session_cache builtin:1000 shared:SSL:10m; > ssl_session_timeout 10m; > > # allow configuring ssl session tickets > ssl_session_tickets on; > > # slightly reduce the time-to-first-byte > ssl_buffer_size 4k; > > # allow configuring custom ssl ciphers > ssl_ciphers 
'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA'; > ssl_prefer_server_ciphers on; > > # In case of errors try the next upstream server before returning an error > proxy_next_upstream error timeout invalid_header http_502 http_503 http_504; > > upstream liupeng-sm-rte-svc-13080 { > server 172.77.69.10:13080; > server 172.77.87.9:13080; > > balancer_by_lua_file /etc/nginx/lua/balancer_by_lua.lua; > > } > > server { > server_name _; > listen 80; > listen 443 ssl spdy http2; > > # PEM sha: aad58c371e57f3c243a7c8143c17762c67a0f18a > ssl_certificate /etc/nginx-ssl/system-snake-oil-certificate.pem; > ssl_certificate_key /etc/nginx-ssl/system-snake-oil-certificate.pem; > > more_set_headers "Strict-Transport-Security: max-age=15724800; includeSubDomains; preload"; > > vhost_traffic_status_filter_by_set_key $geoip_country_code country::$server_name; > > location /SM/ui { > > proxy_set_header Host $host; > > # Pass Real IP > proxy_set_header X-Real-IP $remote_addr; > > # Allow websocket connections > proxy_set_header Upgrade $http_upgrade; > > proxy_set_header Connection ""; > > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_set_header X-Forwarded-Host $host; > proxy_set_header X-Forwarded-Port $server_port; > proxy_set_header X-Forwarded-Proto $pass_access_scheme; > > # mitigate HTTPoxy 
Vulnerability > # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/ > proxy_set_header Proxy ""; > > proxy_connect_timeout 5s; > proxy_send_timeout 60s; > proxy_read_timeout 60s; > > proxy_redirect off; > > proxy_buffering off; > > proxy_http_version 1.1; > > proxy_pass http://liupeng-sm-rte-svc-13080; > > rewrite_by_lua_file /etc/nginx/lua/rewrite_by_lua.lua; > > header_filter_by_lua_file /etc/nginx/lua/header_filter_by_lua.lua; > > } > > } > } > > ----- ???? ----- > ????Aleksandar Lazic > ????tjlp at sina.com > ????nginx > ???Re:_???Re:_???Re:_???Re:_Issue_about_nginx_removing_the_header_"Connection"_in_HTTP_response? > ???2017?03?07? 15?39? > > Hi Liu Peng. > We still don't know your nginx version nor your config! > Cite from below: >> So now the standard Questions from me: >> What's the output of nginx -V ? >> What's your config? > regards > aleks > Am 07-03-2017 02:37, schrieb tjlp at sina.com: >> Hi, Alexks, >> >> I try your proposal and it doesn't work. Actually my issue is the same >> as this one >> http://stackoverflow.com/questions/5100971/nginx-and-proxy-pass-send-connection-close-headers. >> >> 1. I add "keeplive_request 0". The result is that the "Connection: >> close" header is sent to client for every response. That does not match >> my requirement. Our application decides whether to finish the >> application session using this header. >> >> 2. I add "proxy_pass_header Connection". Nginx keeps sending >> "Connection: keep-alive" header to client even the header is >> "Connection: close" from upstream server. >> >> Seems Nginx has some special handling for the Connection header in >> response. The openresty author suggests that the only way for changing >> response header change the nginx C code for this issue. See this issue: >> https://github.com/openresty/headers-more-nginx-module/issues/22#issuecomment-31585052. >> >> Thanks >> Liu Peng >> >> ----- ???? 
----- >> ????Aleksandar Lazic >> ????tjlp at sina.com >> ????nginx >> ???Re:_???Re:_???Re:_Issue_about_nginx_removing_the_header_"Connection"_in_HTTP_response? >> ???2017?03?04? 17?22? >> >> Hi Liu Peng. >> Am 04-03-2017 09:12, schrieb tjlp at sina.com: >>> >>> Hi, Alexks, >>> >>> I don't want to hide the header. >>> My problem is that Nginx change the "Connection: close" header in the >>> reponse from upstream server to "Connction: keep-alive" and send to >>> client. I want to keep the original "Connection: close" header. >> Ah that's a clear question. >> It took us only 3 rounds to get to this clear question ;-) >> So now the standard Questions from me: >> What's the output of nginx -V ? >> What's your config? >> Maybe you have set 'keepalive' in the upstream config >> http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive >> or >> 'proxy_http_version 1.1;' >> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version >> as a last resort you can just pass the header with >> 'proxy_pass_header Connection;'. >> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass_header >> Choose the solution which fit's to your demand. >> I can only guess due to the fact that we don't know your config. >> May I ask you to take a look into this document, which exists in >> several >> languages, thank you very much. >> http://www.catb.org/~esr/faqs/smart-questions.html >> Best regards >> Aleks >>> Thanks >>> Liu Peng >>> >>> ----- ???? ----- >>> ????Aleksandar Lazic >>> ????tjlp at sina.com >>> ????nginx >>> ???Re:_???Re:_Issue_about_nginx_removing_the_header_"Connection"_in_HTTP_response? >>> ???2017?03?03? 16?19? >>> Hi. >>> >>> then one directive upward. >>> >>> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header >>> >>> Cheers >>> >>> aleks >>> >>> Am 03-03-2017 06:00, schrieb tjlp at sina.com: >>> >>>> Hi, >>>> >>>> What I mention is the header in response from backend server. 
Your >>>> answer about proxy_set_header is the "Connection" header in request. >>>> >>>> Thanks >>>> Liu Peng >>>> >>>> ----- ???? ----- >>>> ????Aleksandar Lazic >>>> ????nginx at nginx.org >>>> ????tjlp at sina.com >>>> ???Re: Issue about nginx removing the header "Connection" in HTTP >>>> response? >>>> ???2017?03?03? 06?25? >>>> >>>> Hi. >>>> Am 01-03-2017 08:29, schrieb tjlp at sina.com: >>>>> Hi, nginx guy, >>>>> >>>>> In our system, for some special requests, the upstream server will >>>>> return a response which the header includes "Connection: Close". >>>>> According to HTTP protocol, "Connection" is one-hop header. >>>>> So, nginx will remove this header and the client can't do the >>>>> business >>>>> logic correctly. >>>>> >>>>> How to handle this scenario? >>>> you mean something like this? >>>> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_set_header >>>> If the value of a header field is an empty string then this field >>>> will >>>> not be passed to a proxied server: >>>> proxy_set_header Connection ""; >>>>> Thanks >>>>> Liu Peng >>>>> _______________________________________________ >>>>> nginx mailing list >>>>> nginx at nginx.org >>>>> http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Mar 7 22:45:30 2017 From: francis at daoine.org (Francis Daly) Date: Tue, 7 Mar 2017 22:45:30 +0000 Subject: Passing $upstream_response_time in a header In-Reply-To: References: Message-ID: <20170307224530.GF15209@daoine.org> On Tue, Mar 07, 2017 at 04:38:04PM -0500, Jonathan Simowitz via nginx wrote: Hi there, > I have an nginx server that runs as reverse proxy and I would like to pass > the $upstream_response_time value in a header. I find that when I do the > value is actually a linux timestamp with millisecond resolution instead of > a value of seconds with millisecond resolution. 
My reading of http://nginx.org/r/$upstream_response_time says that what
you report should not be happening.

Do you have a sample config / request / response which shows the problem?

Cheers,

f
--
Francis Daly francis at daoine.org

From efeldhusen.lists at gmail.com Tue Mar 7 22:54:49 2017
From: efeldhusen.lists at gmail.com (Eric Feldhusen)
Date: Tue, 7 Mar 2017 17:54:49 -0500
Subject: Nginx reverse proxy for TFTP UDP port 69 traffic
In-Reply-To: <2fd0e5d5-bd51-cff5-64e9-fdb254f519ed@nginx.com>
References: <04D23F0F-B586-42A5-8C9E-AFA77F5ECEBB@gmail.com>
 <2fd0e5d5-bd51-cff5-64e9-fdb254f519ed@nginx.com>
Message-ID:

> On Mar 7, 2017, at 4:58 PM, Vladimir Homutov wrote:
>
> On 08.03.2017 00:21, Eric Feldhusen wrote:
>> I'm trying to use Nginx to reverse proxy TFTP UDP port 69 traffic and
>> I'm having a problem with getting files through the nginx reverse proxy.
>>
>> My configuration is simple, I'm running TFTP on one CentOS 6.x server
>> and the Nginx reverse proxy on another CentOS 6.x server with the latest
>> Nginx mainline 1.11.10 from the nginx.org repository.
>>
>> TFTP connections to the TFTP server directly work. Using the same
>> commands through the Nginx reverse proxy, connects, but will not
>> download or upload a file through it.
>>
>> If you have any suggestions, I'd appreciate a nudge in the right
>> direction. I'm assuming it's something I'm missing.
>>
>> Eric Feldhusen
>
> Unfortunately, TFTP will not work, because it requires
> that after initial server's reply client will send packets
> to the port, chosen by server (i.e. not 69, but some auto-assigned).
> also, TFTP server recognizes clients by its source port and
> it changes when a packet passes proxy - each packet is originating
> from a new source port on proxy.

Ah, I had just started to look up specifically how TFTP connections work,
so I hadn't seen this yet, but that makes sense with what I was seeing.

Thank you for the quick reply, I appreciate it.
Eric Feldhusen From defan at nginx.com Tue Mar 7 23:39:27 2017 From: defan at nginx.com (Andrei Belov) Date: Wed, 8 Mar 2017 02:39:27 +0300 Subject: Reverse Proxy with 500k connections In-Reply-To: References: <14E44683-8D28-47A2-B371-7167A5F72A12@ultra-secure.de> Message-ID: Yes, split_clients solution fits perfectly in the described use case. Also, nginx >= 1.11.4 has support for IP_BIND_ADDRESS_NO_PORT socket option ([1], [2]) on supported systems (Linux kernel >= 4.2, glibc >= 2.23) which may be helpful as well. Quote from [1]: [..] Add IP_BIND_ADDRESS_NO_PORT to overcome bind(0) limitations: When an application needs to force a source IP on an active TCP socket it has to use bind(IP, port=x). As most applications do not want to deal with already used ports, x is often set to 0, meaning the kernel is in charge to find an available port. But kernel does not know yet if this socket is going to be a listener or be connected. This patch adds a new SOL_IP socket option, asking kernel to ignore the 0 port provided by application in bind(IP, port=0) and only remember the given IP address. The port will be automatically chosen at connect() time, in a way that allows sharing a source port as long as the 4-tuples are unique. [..] [1] https://kernelnewbies.org/Linux_4.2#head-8ccffc90738ffcb0c20caa96bae6799694b8ba3a [2] https://git.kernel.org/torvalds/c/90c337da1524863838658078ec34241f45d8394d > On 08 Mar 2017, at 01:10, Tolga Ceylan wrote: > > How about using > > split_clients "${remote_addr}AAA" $proxy_ip { > 10% 192.168.1.10; > 10% 192.168.1.11; > ... > * 192.168.1.19; > } > > proxy_bind $proxy_ip; > > where $proxy_ip is populated via split clients module to spread the > traffic to 10 internal IPs. > > or add 10 new listener ports (or ips) to your backend server instead, > (and perhaps use least connected load balancing) in upstream {} set of > 10 backends. 
eg: > > upstream backend { > least_conn; > server 192.168.1.21:443; > server 192.168.1.21:444; > server 192.168.1.21:445; > server 192.168.1.21:446; > server 192.168.1.21:447; > server 192.168.1.21:448; > server 192.168.1.21:449; > server 192.168.1.21:450; > server 192.168.1.21:451; > server 192.168.1.21:452; > } > > > > > On Tue, Mar 7, 2017 at 1:21 PM, Rainer Duffner wrote: >> >> Am 07.03.2017 um 22:12 schrieb Nelson Marcos : >> >> Do you really need to use different source ips or it's a solution that you >> picked? >> >> Also, is it a option to set the keepalive option in your upstream configure >> section? >> http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive >> >> >> >> >> I?m not sure if you can proxy web socket connections like http-connections. >> >> After all, they are persistent (hence the large number of connections). >> >> Why can?t you (OP) do the upgrade to 1.10? I thought it?s the only >> ?supported" version anyway? >> >> >> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Tue Mar 7 23:44:05 2017 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Tue, 07 Mar 2017 18:44:05 -0500 Subject: Nginx Map how to check value if empty In-Reply-To: <20170307212359.GE15209@daoine.org> References: <20170307212359.GE15209@daoine.org> Message-ID: <6bca319b739893fcbc4734247e82370f.NginxMailingListEnglish@forum.nginx.org> Hey, I was just looking at the realip module but that module does not seem to support fallback methods like I demonstrated I was in need of. 
(If it does support multiple headers and fallback conditions can someone provide a demonstration) If real_ip_header CF-Connecting-IP; is missing then fallback to real_ip_header X-Forwarded-For; and if that header is missing use $binary_remote_addr; I guess to prevent spoofing what if we merge the map's with a IP header check map so we can keep our dynamic capabilities but check that only the matching whitelisted IP's / IP ranges may send one of those headers. If a non whitelisted IP sends one of those headers we fall back to $binary_remote_addr; making their spoofing pointless. That is the solution I can think of to prevent spoofing is to add to the map's unless anyone has better or known way's that could be simple or more easy. Francis Daly Wrote: ------------------------------------------------------- > On Mon, Mar 06, 2017 at 02:12:40PM -0500, c0nw0nk wrote: > > Hi there, > > good that you've found some more answers. > > There's still some to be worked on, though, I suspect. > > > So to explain how to get the origin IP for each method someone could > be > > using here is the list : > > > > Cloudflares proxied traffic : > > sets the header $http_cf_connecting_ip so use this header to get the > > Client's real IP > > Stock nginx has the realip module which will allow you to use a value > from one specific http header, as if it were the connecting address. > > And stock nginx knows that the client can set any header to any value, > so it can be configured to only believe the value if it was set by a > trusted source. (More or less). > > It looks like this $http_cf_connecting_ip contains a single IP > address, > which is the address of the thing that connected to Cloudflare -- > either > the client, or a proxy that it uses. And it can be trusted, if the > incoming request went through the Cloudflare reverse proxy. (And, > presumably, it is spoofed if the incoming request did not go through > the Cloudflare reverse proxy.) 
> > > Traffic from cloudflare via the DNS only connections : > > These would not have the $http_cf_connecting_ip header present. > > But those connections hit a load balancing ip what sets the header > > $http_x_forwarded_for header so that is the way to get the Clients > real ip > > via those connections. > > $http_x_forwarded_for is common enough; it can hold a list of IP > addresses. The realip module knows how to deal with it. > > Whatever method you use to read it, you should be aware that the > header is not necessarily exactly one IP address. And the client can > set the header to any initial value; the "load balancing ip" (unless > documented otherwise) probably creates-or-adds-to the header, rather > than creates-or-replaces. > > > And then some connections don't hit my load balancing IP and go > directly to > > a specific origin server these connections can use $remote_addr. > > They can. But those connections might also have $http_x_forwarded_for. > And > $http_cf_connecting_ip. So you will need a reliable way of > distinguishing > between case#1 and case#2 and case#3, if you care about that. > > (Probably, the majority of "innocent" requests will not have spoofed > headers. If that is good enough for what you are trying to achieve, > then you're ok.) > > > My Solution / conclusion : > > > > How to come up with a fix that allows me to obtain the real IP in a > dynamic > > situation like this ? > > I would suggest one of: > > * go to extra measures to cause there to exist a new feature in nginx, > such that the realip module will look at more than one header to > determine > the address to use > > or > > * recognise that if Cloudflare put in a CF-Connecting-IP header, they > probably also put in a X-Forwarded-For header; ignore CF-Connecting-IP > and just use the realip module with X-Forwarded-For. > > http://nginx.org/r/real_ip_header and the rest of that page. > > > I have solved my issue with the following. > > This will work, with the above caveats. 
> > If you have time to experiment, you may find that the realip module > does > something similar in a less fragile way. > > Cheers, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272744,272820#msg-272820 From tolga.ceylan at gmail.com Wed Mar 8 00:57:49 2017 From: tolga.ceylan at gmail.com (Tolga Ceylan) Date: Tue, 7 Mar 2017 16:57:49 -0800 Subject: Reverse Proxy with 500k connections In-Reply-To: References: <14E44683-8D28-47A2-B371-7167A5F72A12@ultra-secure.de> Message-ID: Of course, with split_clients, you are at the mercy of the hashing and hope that this distribution will spread work evenly based on incoming client address space and the duration of these connections, so you might run into the limits despite having enough port capacity. More importantly, in case of failures, your clients will see errors, since nginx will not retry (and even if it did, the hashing will land on the same exhausted port/ip set.) Upstream {} with multiple backends approach is a bit more robust as if the ports are ever exhausted, nginx can try the next upstream. And you can try to control this further by using least_conn backend selection. On Tue, Mar 7, 2017 at 3:39 PM, Andrei Belov wrote: > Yes, split_clients solution fits perfectly in the described use case. > > Also, nginx >= 1.11.4 has support for IP_BIND_ADDRESS_NO_PORT socket > option ([1], [2]) on supported systems (Linux kernel >= 4.2, glibc >= 2.23) which > may be helpful as well. > > Quote from [1]: > > [..] > Add IP_BIND_ADDRESS_NO_PORT to overcome bind(0) limitations: When an > application needs to force a source IP on an active TCP socket it has to use > bind(IP, port=x). As most applications do not want to deal with already used > ports, x is often set to 0, meaning the kernel is in charge to find an > available port. 
But kernel does not know yet if this socket is going to be a > listener or be connected. This patch adds a new SOL_IP socket option, asking > kernel to ignore the 0 port provided by application in bind(IP, port=0) and > only remember the given IP address. The port will be automatically chosen at > connect() time, in a way that allows sharing a source port as long as the > 4-tuples are unique. > [..] > > > [1] https://kernelnewbies.org/Linux_4.2#head-8ccffc90738ffcb0c20caa96bae6799694b8ba3a > [2] https://git.kernel.org/torvalds/c/90c337da1524863838658078ec34241f45d8394d > > >> On 08 Mar 2017, at 01:10, Tolga Ceylan wrote: >> >> How about using >> >> split_clients "${remote_addr}AAA" $proxy_ip { >> 10% 192.168.1.10; >> 10% 192.168.1.11; >> ... >> * 192.168.1.19; >> } >> >> proxy_bind $proxy_ip; >> >> where $proxy_ip is populated via split clients module to spread the >> traffic to 10 internal IPs. >> >> or add 10 new listener ports (or ips) to your backend server instead, >> (and perhaps use least connected load balancing) in upstream {} set of >> 10 backends. eg: >> >> upstream backend { >> least_conn; >> server 192.168.1.21:443; >> server 192.168.1.21:444; >> server 192.168.1.21:445; >> server 192.168.1.21:446; >> server 192.168.1.21:447; >> server 192.168.1.21:448; >> server 192.168.1.21:449; >> server 192.168.1.21:450; >> server 192.168.1.21:451; >> server 192.168.1.21:452; >> } >> >> >> >> >> On Tue, Mar 7, 2017 at 1:21 PM, Rainer Duffner wrote: >>> >>> Am 07.03.2017 um 22:12 schrieb Nelson Marcos : >>> >>> Do you really need to use different source ips or it's a solution that you >>> picked? >>> >>> Also, is it a option to set the keepalive option in your upstream configure >>> section? >>> http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive >>> >>> >>> >>> >>> I?m not sure if you can proxy web socket connections like http-connections. >>> >>> After all, they are persistent (hence the large number of connections). 
>>> >>> Why can?t you (OP) do the upgrade to 1.10? I thought it?s the only >>> ?supported" version anyway? >>> >>> >>> >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From tjlp at sina.com Wed Mar 8 00:58:49 2017 From: tjlp at sina.com (tjlp at sina.com) Date: Wed, 08 Mar 2017 08:58:49 +0800 Subject: =?UTF-8?B?5Zue5aSN77yaUmU6X+WbnuWkje+8mlJlOl/lm57lpI3vvJpSZTpf5Zue5aSN77ya?= =?UTF-8?B?UmU6X+WbnuWkje+8mlJlOl9Jc3N1ZV9hYm91dF9uZ2lueF9yZW1vdmluZ190?= =?UTF-8?B?aGVfaGVhZGVyXyJDb25uZWN0aW9uIl9pbl9IVFRQX3Jlc3BvbnNlPw==?= Message-ID: <20170308005849.67DE1B000D0@webmail.sinamail.sina.com.cn> Hi, Aleks, This nginx conf is generated by Kubernetes nginx ingress controller. We use the Nginx in the kubernetes cluster. So many modules are there. The lua script is supported by the open sourced OpenResty. You can google it to find how and why use it. We use it for our special load balancing. For the log, I am not sure what you need. Thanks ----- ???? ----- ????Aleksandar Lazic ????tjlp at sina.com ????nginx ???Re:_???Re:_???Re:_???Re:_???Re:_Issue_about_nginx_removing_the_header_"Connection"_in_HTTP_response? ???2017?03?08? 06?26? Hi. Well that's a lot modules and lua stuff there. What's in the '*by_lua_file's ? Can you run from a specific IP the debug log to see what's happen in nginx? 
http://nginx.org/en/docs/debugging_log.html
regards
aleks
Am 07-03-2017 10:49, schrieb tjlp at sina.com:
Hi, Aleks,
The result of nginx -V is as follows:
nginx version: nginx/1.11.1
built by gcc 4.9.2 (Debian 4.9.2-10)
built with OpenSSL 1.0.1t 3 May 2016
TLS SNI support enabled
configure arguments: --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_sub_module --with-http_v2_module --with-http_spdy_module --with-stream --with-stream_ssl_module --with-threads --with-file-aio --without-mail_pop3_module --without-mail_smtp_module --without-mail_imap_module --without-http_uwsgi_module --without-http_scgi_module --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --add-module=/tmp/build/ngx_devel_kit-0.3.0 --add-module=/tmp/build/set-misc-nginx-module-0.30 --add-module=/tmp/build/nginx-module-vts-0.1.9 --add-module=/tmp/build/lua-nginx-module-0.10.5 --add-module=/tmp/build/headers-more-nginx-module-0.30 --add-module=/tmp/build/nginx-goodies-nginx-sticky-module-ng-c78b7dd79d0d --add-module=/tmp/build/nginx-http-auth-digest-f85f5d6fdcc06002ff879f5cbce930999c287011 --add-module=/tmp/build/ngx_http_substitutions_filter_module-bc58cb11844bc42735bbaef7085ea86ace46d05b --add-module=/tmp/build/lua-upstream-nginx-module-0.05
The nginx conf is:
daemon off;
worker_processes 2;
pid /run/nginx.pid;
worker_rlimit_nofile 131072;
pcre_jit on;
events {
    multi_accept on;
    worker_connections 16384;
    use epoll;
}
http {
    lua_shared_dict server_sessioncnt_dict 20k;
    lua_shared_dict server_dict 20k;
    lua_shared_dict server_acceptnewconn_dict 20k;
    lua_shared_dict sessionid_server_dict 100k;
    real_ip_header X-Forwarded-For;
    set_real_ip_from 0.0.0.0/0;
    real_ip_recursive on;
    geoip_country /etc/nginx/GeoIP.dat;
    geoip_city /etc/nginx/GeoLiteCity.dat;
    geoip_proxy_recursive on;
    vhost_traffic_status_zone shared:vhost_traffic_status:10m;
    vhost_traffic_status_filter_by_set_key $geoip_country_code country::*;
    # lua section to return proper error codes when custom pages are used
    lua_package_path '.?.lua;./etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/lua-resty-http/lib/?.lua;/etc/nginx/lua/vendor/lua-resty-lrucache/lib/?.lua;/etc/nginx/lua/vendor/lua-resty-core/lib/?.lua;/etc/nginx/lua/vendor/lua-resty-balancer/lib/?.lua;';
    init_by_lua_file /etc/nginx/lua/init_by_lua.lua;
    sendfile on;
    aio threads;
    tcp_nopush on;
    tcp_nodelay on;
    log_subrequest on;
    reset_timedout_connection on;
    keepalive_timeout 75s;
    types_hash_max_size 2048;
    server_names_hash_max_size 512;
    server_names_hash_bucket_size 64;
    include /etc/nginx/mime.types;
    default_type text/html;
    gzip on;
    gzip_comp_level 5;
    gzip_http_version 1.1;
    gzip_min_length 256;
    gzip_types application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component;
    gzip_proxied any;
    client_max_body_size "64m";
    log_format upstreaminfo '$remote_addr - '
        '[$proxy_add_x_forwarded_for] - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" '
        '$request_length $request_time $upstream_addr $upstream_response_length $upstream_response_time
$upstream_status';
    map $request $loggable {
        default 1;
    }
    access_log /var/log/nginx/access.log upstreaminfo if=$loggable;
    error_log /var/log/nginx/error.log notice;
    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }
    # trust http_x_forwarded_proto headers correctly indicate ssl offloading
    map $http_x_forwarded_proto $pass_access_scheme {
        default $http_x_forwarded_proto;
        '' $scheme;
    }
    # Map a response error watching the header Content-Type
    map $http_accept $httpAccept {
        default html;
        application/json json;
        application/xml xml;
        text/plain text;
    }
    map $httpAccept $httpReturnType {
        default text/html;
        json application/json;
        xml application/xml;
        text text/plain;
    }
    server_name_in_redirect off;
    port_in_redirect off;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    # turn on session caching to drastically improve performance
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_session_timeout 10m;
    # allow configuring ssl session tickets
    ssl_session_tickets on;
    # slightly reduce the time-to-first-byte
    ssl_buffer_size 4k;
    # allow configuring custom ssl ciphers
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;
    # In case of errors try the next upstream server before returning an error
    proxy_next_upstream error timeout invalid_header http_502 http_503 http_504;
    upstream
liupeng-sm-rte-svc-13080 {
        server 172.77.69.10:13080;
        server 172.77.87.9:13080;
        balancer_by_lua_file /etc/nginx/lua/balancer_by_lua.lua;
    }
    server {
        server_name _;
        listen 80;
        listen 443 ssl spdy http2;
        # PEM sha: aad58c371e57f3c243a7c8143c17762c67a0f18a
        ssl_certificate /etc/nginx-ssl/system-snake-oil-certificate.pem;
        ssl_certificate_key /etc/nginx-ssl/system-snake-oil-certificate.pem;
        more_set_headers "Strict-Transport-Security: max-age=15724800; includeSubDomains; preload";
        vhost_traffic_status_filter_by_set_key $geoip_country_code country::$server_name;
        location /SM/ui {
            proxy_set_header Host $host;
            # Pass Real IP
            proxy_set_header X-Real-IP $remote_addr;
            # Allow websocket connections
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "";
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Port $server_port;
            proxy_set_header X-Forwarded-Proto $pass_access_scheme;
            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy "";
            proxy_connect_timeout 5s;
            proxy_send_timeout 60s;
            proxy_read_timeout 60s;
            proxy_redirect off;
            proxy_buffering off;
            proxy_http_version 1.1;
            proxy_pass http://liupeng-sm-rte-svc-13080;
            rewrite_by_lua_file /etc/nginx/lua/rewrite_by_lua.lua;
            header_filter_by_lua_file /etc/nginx/lua/header_filter_by_lua.lua;
        }
    }
}

----- ???? -----
????Aleksandar Lazic
????tjlp at sina.com
????nginx
???Re:_???Re:_???Re:_???Re:_Issue_about_nginx_removing_the_header_"Connection"_in_HTTP_response?
???2017?03?07? 15?39?

Hi Liu Peng.
We still don't know your nginx version nor your config!
Cite from below:
> So now the standard Questions from me:
> What's the output of nginx -V ?
> What's your config?
regards
aleks
Am 07-03-2017 02:37, schrieb tjlp at sina.com:
> Hi, Alexks,
>
> I try your proposal and it doesn't work.
Actually my issue is the same > as this one > http://stackoverflow.com/questions/5100971/nginx-and-proxy-pass-send-connection-close-headers. > > 1. I add "keeplive_request 0". The result is that the "Connection: > close" header is sent to client for every response. That does not match > my requirement. Our application decides whether to finish the > application session using this header. > > 2. I add "proxy_pass_header Connection". Nginx keeps sending > "Connection: keep-alive" header to client even the header is > "Connection: close" from upstream server. > > Seems Nginx has some special handling for the Connection header in > response. The openresty author suggests that the only way for changing > response header change the nginx C code for this issue. See this issue: > https://github.com/openresty/headers-more-nginx-module/issues/22#issuecomment-31585052. > > Thanks > Liu Peng > > ----- ???? ----- > ????Aleksandar Lazic > ????tjlp at sina.com > ????nginx > ???Re:_???Re:_???Re:_Issue_about_nginx_removing_the_header_"Connection"_in_HTTP_response? > ???2017?03?04? 17?22? > > Hi Liu Peng. > Am 04-03-2017 09:12, schrieb tjlp at sina.com: >> >> Hi, Alexks, >> >> I don't want to hide the header. >> My problem is that Nginx change the "Connection: close" header in the >> reponse from upstream server to "Connction: keep-alive" and send to >> client. I want to keep the original "Connection: close" header. > Ah that's a clear question. > It took us only 3 rounds to get to this clear question ;-) > So now the standard Questions from me: > What's the output of nginx -V ? > What's your config? > Maybe you have set 'keepalive' in the upstream config > http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive > or > 'proxy_http_version 1.1;' > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_http_version > as a last resort you can just pass the header with > 'proxy_pass_header Connection;'. 
> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass_header > Choose the solution which fit's to your demand. > I can only guess due to the fact that we don't know your config. > May I ask you to take a look into this document, which exists in > several > languages, thank you very much. > http://www.catb.org/~esr/faqs/smart-questions.html > Best regards > Aleks >> Thanks >> Liu Peng >> >> ----- ???? ----- >> ????Aleksandar Lazic >> ????tjlp at sina.com >> ????nginx >> ???Re:_???Re:_Issue_about_nginx_removing_the_header_"Connection"_in_HTTP_response? >> ???2017?03?03? 16?19? >> Hi. >> >> then one directive upward. >> >> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header >> >> Cheers >> >> aleks >> >> Am 03-03-2017 06:00, schrieb tjlp at sina.com: >> >>> Hi, >>> >>> What I mention is the header in response from backend server. Your >>> answer about proxy_set_header is the "Connection" header in request. >>> >>> Thanks >>> Liu Peng >>> >>> ----- ???? ----- >>> ????Aleksandar Lazic >>> ????nginx at nginx.org >>> ????tjlp at sina.com >>> ???Re: Issue about nginx removing the header "Connection" in HTTP >>> response? >>> ???2017?03?03? 06?25? >>> >>> Hi. >>> Am 01-03-2017 08:29, schrieb tjlp at sina.com: >>>> Hi, nginx guy, >>>> >>>> In our system, for some special requests, the upstream server will >>>> return a response which the header includes "Connection: Close". >>>> According to HTTP protocol, "Connection" is one-hop header. >>>> So, nginx will remove this header and the client can't do the >>>> business >>>> logic correctly. >>>> >>>> How to handle this scenario? >>> you mean something like this? 
>>> http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_set_header >>> If the value of a header field is an empty string then this field >>> will >>> not be passed to a proxied server: >>> proxy_set_header Connection ""; >>>> Thanks >>>> Liu Peng >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From tolga.ceylan at gmail.com Wed Mar 8 01:21:34 2017 From: tolga.ceylan at gmail.com (Tolga Ceylan) Date: Tue, 7 Mar 2017 17:21:34 -0800 Subject: keepalive_requests default 100 Message-ID: Does anybody have any history/rationale on why keepalive_requests use default of 100 requests in nginx? This same default is also used in Apache. But the default seems very small in today's standards. http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests Regards, Tolga From nginx-forum at forum.nginx.org Wed Mar 8 06:56:04 2017 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Wed, 08 Mar 2017 01:56:04 -0500 Subject: Nginx Map how to check value if empty In-Reply-To: <6bca319b739893fcbc4734247e82370f.NginxMailingListEnglish@forum.nginx.org> References: <20170307212359.GE15209@daoine.org> <6bca319b739893fcbc4734247e82370f.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hey again, So I modified my config to this as to prevent client's IP spoofing. 
map $http_x_forwarded_for $client_ip_x_forwarded_for {
    "" $remote_addr; #if this header missing set remote_addr as real ip
    default $http_x_forwarded_for;
}

map $http_cf_connecting_ip $client_ip_from_cf {
    "" $client_ip_x_forwarded_for; #if this header missing set x-forwarded-for as real ip
    default $http_cf_connecting_ip;
}

map $remote_addr $client_ip_output {
    192.168.0.12 $client_ip_from_cf; #if ip match then it is a trusted ip (whitelist)
    192.168.0.13 $client_ip_from_cf; #if ip match then it is a trusted ip (whitelist)
    192.168.0.14 $client_ip_from_cf; #if ip match then it is a trusted ip (whitelist)
    192.168.0.15 $client_ip_from_cf; #if ip match then it is a trusted ip (whitelist)
    default $remote_addr; #if none of ip matched then default to remote_addr
}

Don't know if I can do ranges in my map example above, so I made a geo map too doing the exact same thing in case it may be better suited.

geo $remote_addr $client_ip_output { #most likely should use geo for the range capability and other benefits
    127.0.0.1 $client_ip_from_cf; #if ip match then it is a trusted ip (whitelist)
    192.168.1.0/24 $client_ip_from_cf; #if ip match then it is a trusted ip (whitelist)
    10.1.0.0/16 $client_ip_from_cf; #if ip match then it is a trusted ip (whitelist)
    ::1 $client_ip_from_cf; #if ip match then it is a trusted ip (whitelist)
    2001:0db8::/32 $client_ip_from_cf; #if ip match then it is a trusted ip (whitelist)
    default $remote_addr; #if none of ip matched then default to remote_addr
}

The usage of the final output is as easy as this.
It works on an IP whitelist basis with fallback headers: if one header is
missing, it goes to the next header for the real IP, and so on until no
more potential real-IP headers exist, at which point we use the
connection's $remote_addr.

It would be nice if the realip module did this, but luckily we don't need
the realip module; as this shows, we can do it with maps and geo maps.

c0nw0nk Wrote:
-------------------------------------------------------
> Hey,
>
> I was just looking at the realip module but that module does not seem
> to support fallback methods like I demonstrated I was in need of. (If
> it does support multiple headers and fallback conditions can someone
> provide a demonstration)
>
> If real_ip_header CF-Connecting-IP; is missing then fallback to
> real_ip_header X-Forwarded-For; and if that header is missing use
> $binary_remote_addr;
>
> I guess to prevent spoofing what if we merge the map's with a IP
> header check map so we can keep our dynamic capabilities but check
> that only the matching whitelisted IP's / IP ranges may send one of
> those headers.
> If a non whitelisted IP sends one of those headers we fall back to
> $binary_remote_addr; making their spoofing pointless.
>
> That is the solution I can think of to prevent spoofing is to add to
> the map's unless anyone has better or known way's that could be simple
> or more easy.
>
> Francis Daly Wrote:
> -------------------------------------------------------
> > On Mon, Mar 06, 2017 at 02:12:40PM -0500, c0nw0nk wrote:
> >
> > Hi there,
> >
> > good that you've found some more answers.
> >
> > There's still some to be worked on, though, I suspect.
> > > > > So to explain how to get the origin IP for each method someone > could > > be > > > using here is the list : > > > > > > Cloudflares proxied traffic : > > > sets the header $http_cf_connecting_ip so use this header to get > the > > > Client's real IP > > > > Stock nginx has the realip module which will allow you to use a > value > > from one specific http header, as if it were the connecting > address. > > > > And stock nginx knows that the client can set any header to any > value, > > so it can be configured to only believe the value if it was set by > a > > trusted source. (More or less). > > > > It looks like this $http_cf_connecting_ip contains a single IP > > address, > > which is the address of the thing that connected to Cloudflare -- > > either > > the client, or a proxy that it uses. And it can be trusted, if the > > incoming request went through the Cloudflare reverse proxy. (And, > > presumably, it is spoofed if the incoming request did not go > through > > the Cloudflare reverse proxy.) > > > > > Traffic from cloudflare via the DNS only connections : > > > These would not have the $http_cf_connecting_ip header present. > > > But those connections hit a load balancing ip what sets the > header > > > $http_x_forwarded_for header so that is the way to get the > Clients > > real ip > > > via those connections. > > > > $http_x_forwarded_for is common enough; it can hold a list of IP > > addresses. The realip module knows how to deal with it. > > > > Whatever method you use to read it, you should be aware that the > > header is not necessarily exactly one IP address. And the client > can > > set the header to any initial value; the "load balancing ip" > (unless > > documented otherwise) probably creates-or-adds-to the header, > rather > > than creates-or-replaces. > > > > > And then some connections don't hit my load balancing IP and go > > directly to > > > a specific origin server these connections can use $remote_addr. > > > > They can. 
But those connections might also have > $http_x_forwarded_for. > > And > > $http_cf_connecting_ip. So you will need a reliable way of > > distinguishing > > between case#1 and case#2 and case#3, if you care about that. > > > > (Probably, the majority of "innocent" requests will not have > spoofed > > headers. If that is good enough for what you are trying to achieve, > > then you're ok.) > > > > > My Solution / conclusion : > > > > > > How to come up with a fix that allows me to obtain the real IP in > a > > dynamic > > > situation like this ? > > > > I would suggest one of: > > > > * go to extra measures to cause there to exist a new feature in > nginx, > > such that the realip module will look at more than one header to > > determine > > the address to use > > > > or > > > > * recognise that if Cloudflare put in a CF-Connecting-IP header, > they > > probably also put in a X-Forwarded-For header; ignore > CF-Connecting-IP > > and just use the realip module with X-Forwarded-For. > > > > http://nginx.org/r/real_ip_header and the rest of that page. > > > > > I have solved my issue with the following. > > > > This will work, with the above caveats. > > > > If you have time to experiment, you may find that the realip module > > does > > something similar in a less fragile way. > > > > Cheers, > > > > f > > -- > > Francis Daly francis at daoine.org > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272744,272824#msg-272824 From reallfqq-nginx at yahoo.fr Wed Mar 8 09:39:40 2017 From: reallfqq-nginx at yahoo.fr (B.R.) 
Date: Wed, 8 Mar 2017 10:39:40 +0100
Subject: Nginx Map how to check value if empty
In-Reply-To: 
References: <20170307212359.GE15209@daoine.org> <6bca319b739893fcbc4734247e82370f.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

This kind of logic, as you found out, can be handled in nginx with the help
of the proper tools, namely the map module.

You are one step away: you can actually program what you require to be
feeding the realip module with the HTTP header name you ended up with.
Rather than having contiguous maps, head towards some cascading ones.

Have a look at the following example, specifically the precedence it
enforces and the fallback value (as needed) it provides.

map $http_x_foo $fooHeader {
    default X-Foo;
    "" "Nothing";
}
map $http_x_bar $myValue {
    default X-Bar;
    "" $fooHeader;
}

This is scalable, configuration-wise. However, even if nginx is efficient
at processing, it is always best to keep in mind those maps will be
evaluated for each request.
---
*B. R.*

On Wed, Mar 8, 2017 at 7:56 AM, c0nw0nk wrote:

> Hey again,
>
> So I modified my config to this as to prevent client's IP spoofing.
> > > map $http_x_forwarded_for $client_ip_x_forwarded_for { > "" $remote_addr; #if this header missing set remote_addr as real ip > default $http_x_forwarded_for; > } > map $http_cf_connecting_ip $client_ip_from_cf { > "" $client_ip_x_forwarded_for; #if this header missing set x-forwarded-for > as real ip > default $http_cf_connecting_ip; > } > map $remote_addr $client_ip_output { > 192.168.0.12 $client_ip_from_cf; #if ip match then it is a trusted ip > (whitelist) > 192.168.0.13 $client_ip_from_cf; #if ip match then it is a trusted ip > (whitelist) > 192.168.0.14 $client_ip_from_cf; #if ip match then it is a trusted ip > (whitelist) > 192.168.0.15 $client_ip_from_cf; #if ip match then it is a trusted ip > (whitelist) > > default $remote_addr; #if none of ip matched then default to remote_addr > } > > > > > Don't know if i can do ranges in my map example above so I made a geo map > too doing the exact same thing in case it may be better suited. > > geo $remote_addr $client_ip_output { #most likely should use geo for the > range capability and other benefits > 127.0.0.1 $client_ip_from_cf; #if ip match then it is a trusted ip > (whitelist) > 192.168.1.0/24 $client_ip_from_cf; #if ip match then it is a trusted ip > (whitelist) > 10.1.0.0/16 $client_ip_from_cf; #if ip match then it is a trusted ip > (whitelist) > ::1 $client_ip_from_cf; #if ip match then it is a trusted ip > (whitelist) > 2001:0db8::/32 $client_ip_from_cf; #if ip match then it is a trusted ip > (whitelist) > > default $remote_addr; #if none of ip matched then default to remote_addr > } > > > The usage of the final output is as easy as this. "$client_ip_output;" > limit_req_zone $client_ip_output zone=one:10m rate=1r/s; #usage example for > the resulting output after all fallback checks and ip whitelist checks etc. > > > Based of what I showed here can anyone point out any problems they see with > it etc from what I want to achieve I think it should work fine. 
> > IP whitelist basis, Fallback header's in case one is missing it goes to the > next header for the realip, If the next header is missing it goes to the > next until no more potential realip headers exist so we set their IP as > their connection $remote_addr. > > Be nice if the realip module did this but lucky we don't need the realip > module this shows and can do so with map's and geo maps. > > > c0nw0nk Wrote: > ------------------------------------------------------- > > Hey, > > > > I was just looking at the realip module but that module does not seem > > to support fallback methods like I demonstrated I was in need of. (If > > it does support multiple headers and fallback conditions can someone > > provide a demonstration) > > > > If real_ip_header CF-Connecting-IP; is missing then fallback to > > real_ip_header X-Forwarded-For; and if that header is missing use > > $binary_remote_addr; > > > > I guess to prevent spoofing what if we merge the map's with a IP > > header check map so we can keep our dynamic capabilities but check > > that only the matching whitelisted IP's / IP ranges may send one of > > those headers. > > If a non whitelisted IP sends one of those headers we fall back to > > $binary_remote_addr; making their spoofing pointless. > > > > That is the solution I can think of to prevent spoofing is to add to > > the map's unless anyone has better or known way's that could be simple > > or more easy. > > > > Francis Daly Wrote: > > ------------------------------------------------------- > > > On Mon, Mar 06, 2017 at 02:12:40PM -0500, c0nw0nk wrote: > > > > > > Hi there, > > > > > > good that you've found some more answers. > > > > > > There's still some to be worked on, though, I suspect. 
> > > > > > > So to explain how to get the origin IP for each method someone > > could > > > be > > > > using here is the list : > > > > > > > > Cloudflares proxied traffic : > > > > sets the header $http_cf_connecting_ip so use this header to get > > the > > > > Client's real IP > > > > > > Stock nginx has the realip module which will allow you to use a > > value > > > from one specific http header, as if it were the connecting > > address. > > > > > > And stock nginx knows that the client can set any header to any > > value, > > > so it can be configured to only believe the value if it was set by > > a > > > trusted source. (More or less). > > > > > > It looks like this $http_cf_connecting_ip contains a single IP > > > address, > > > which is the address of the thing that connected to Cloudflare -- > > > either > > > the client, or a proxy that it uses. And it can be trusted, if the > > > incoming request went through the Cloudflare reverse proxy. (And, > > > presumably, it is spoofed if the incoming request did not go > > through > > > the Cloudflare reverse proxy.) > > > > > > > Traffic from cloudflare via the DNS only connections : > > > > These would not have the $http_cf_connecting_ip header present. > > > > But those connections hit a load balancing ip what sets the > > header > > > > $http_x_forwarded_for header so that is the way to get the > > Clients > > > real ip > > > > via those connections. > > > > > > $http_x_forwarded_for is common enough; it can hold a list of IP > > > addresses. The realip module knows how to deal with it. > > > > > > Whatever method you use to read it, you should be aware that the > > > header is not necessarily exactly one IP address. And the client > > can > > > set the header to any initial value; the "load balancing ip" > > (unless > > > documented otherwise) probably creates-or-adds-to the header, > > rather > > > than creates-or-replaces. 
> > > > > > > And then some connections don't hit my load balancing IP and go > > > directly to > > > > a specific origin server these connections can use $remote_addr. > > > > > > They can. But those connections might also have > > $http_x_forwarded_for. > > > And > > > $http_cf_connecting_ip. So you will need a reliable way of > > > distinguishing > > > between case#1 and case#2 and case#3, if you care about that. > > > > > > (Probably, the majority of "innocent" requests will not have > > spoofed > > > headers. If that is good enough for what you are trying to achieve, > > > then you're ok.) > > > > > > > My Solution / conclusion : > > > > > > > > How to come up with a fix that allows me to obtain the real IP in > > a > > > dynamic > > > > situation like this ? > > > > > > I would suggest one of: > > > > > > * go to extra measures to cause there to exist a new feature in > > nginx, > > > such that the realip module will look at more than one header to > > > determine > > > the address to use > > > > > > or > > > > > > * recognise that if Cloudflare put in a CF-Connecting-IP header, > > they > > > probably also put in a X-Forwarded-For header; ignore > > CF-Connecting-IP > > > and just use the realip module with X-Forwarded-For. > > > > > > http://nginx.org/r/real_ip_header and the rest of that page. > > > > > > > I have solved my issue with the following. > > > > > > This will work, with the above caveats. > > > > > > If you have time to experiment, you may find that the realip module > > > does > > > something similar in a less fragile way. > > > > > > Cheers, > > > > > > f > > > -- > > > Francis Daly francis at daoine.org > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > Posted at Nginx Forum: https://forum.nginx.org/read. 
> php?2,272744,272824#msg-272824
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From reallfqq-nginx at yahoo.fr Wed Mar 8 10:36:22 2017
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Wed, 8 Mar 2017 11:36:22 +0100
Subject: keepalive_requests default 100
In-Reply-To: 
References: 
Message-ID: 

I suspect the nginx team chose this value for the very reason it was
adapted to the use of Apache (remember that nginx is, since its beginning,
largely used as a reverse Web proxy in front of Apache farms). I guess the
intent here is to mimic Apache behavior by default so adoption of that
technology is eased.

In the Apache world, long-lasting connections may result in resource
starvation, since 1 connection = expensive resources allocated. Thus, you
can quickly end up with a few (relatively and even infrequently) active
clients saturating a Web server machine, leaving no spots free for
newcomers, especially nowadays with multithreaded/processed browsers.
Recycling connections often is an attempt at fighting that.

Unless I am overlooking something, there seems to be no technical ground
for such a limit to be pertinent with nginx. As debated in the past on
this ML (and at the 1st nginx conference), some other default values might
not be right anymore, but changing default values might silently break
backwards-compatibility with setups which do not comply with the new
defaults.

History repeatedly shows that changing old defaults and even deprecating
the use of rotten algorithms is faced with huge resistance. IE6? SSLv3?
RC4? HTTP/1.0? Amongst many others...

Thus the nginx team position has (so far?) been to never touch the
built-in defaults (and neither maybe the default nginx.conf? I have not
checked that) and let people override them through configuration.
---
*B.
R.*

On Wed, Mar 8, 2017 at 2:21 AM, Tolga Ceylan wrote:

> Does anybody have any history/rationale on why keepalive_requests
> use default of 100 requests in nginx? This same default is also used in
> Apache. But the default seems very small in today's standards.
>
> http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests
>
> Regards,
> Tolga
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From maxim at nginx.com Wed Mar 8 11:18:46 2017
From: maxim at nginx.com (Maxim Konovalov)
Date: Wed, 8 Mar 2017 14:18:46 +0300
Subject: Reverse Proxy with 500k connections
In-Reply-To: 
References: <14E44683-8D28-47A2-B371-7167A5F72A12@ultra-secure.de>
Message-ID: <76657f90-c47a-6fc5-901c-9d1602426ec8@nginx.com>

On 3/8/17 3:57 AM, Tolga Ceylan wrote:
> Of course, with split_clients, you are at the mercy of the hashing and
> hope that this distribution will spread work
> evenly based on incoming client address space and the duration of
> these connections, so you might run into
> the limits despite having enough port capacity. More importantly, in
> case of failures, your clients will see
> errors, since nginx will not retry (and even if it did, the hashing
> will land on the same exhausted port/ip set.)
>
IP_BIND_ADDRESS_NO_PORT in fresh Linux kernels did the trick for nginx.
This is basically why we added support for it not that long ago.
You can find patches that work around the lack of this feature though.

> Upstream {} with multiple backends approach is a bit more robust as if
> the ports are ever exhausted, nginx
> can try the next upstream. And you can try to control this further by
> using least_conn backend selection.
>
>
> On Tue, Mar 7, 2017 at 3:39 PM, Andrei Belov wrote:
>> Yes, split_clients solution fits perfectly in the described use case.
>> >> Also, nginx >= 1.11.4 has support for IP_BIND_ADDRESS_NO_PORT socket >> option ([1], [2]) on supported systems (Linux kernel >= 4.2, glibc >= 2.23) which >> may be helpful as well. >> >> Quote from [1]: >> >> [..] >> Add IP_BIND_ADDRESS_NO_PORT to overcome bind(0) limitations: When an >> application needs to force a source IP on an active TCP socket it has to use >> bind(IP, port=x). As most applications do not want to deal with already used >> ports, x is often set to 0, meaning the kernel is in charge to find an >> available port. But kernel does not know yet if this socket is going to be a >> listener or be connected. This patch adds a new SOL_IP socket option, asking >> kernel to ignore the 0 port provided by application in bind(IP, port=0) and >> only remember the given IP address. The port will be automatically chosen at >> connect() time, in a way that allows sharing a source port as long as the >> 4-tuples are unique. >> [..] >> >> >> [1] https://kernelnewbies.org/Linux_4.2#head-8ccffc90738ffcb0c20caa96bae6799694b8ba3a >> [2] https://git.kernel.org/torvalds/c/90c337da1524863838658078ec34241f45d8394d >> >> >>> On 08 Mar 2017, at 01:10, Tolga Ceylan wrote: >>> >>> How about using >>> >>> split_clients "${remote_addr}AAA" $proxy_ip { >>> 10% 192.168.1.10; >>> 10% 192.168.1.11; >>> ... >>> * 192.168.1.19; >>> } >>> >>> proxy_bind $proxy_ip; >>> >>> where $proxy_ip is populated via split clients module to spread the >>> traffic to 10 internal IPs. >>> >>> or add 10 new listener ports (or ips) to your backend server instead, >>> (and perhaps use least connected load balancing) in upstream {} set of >>> 10 backends. 
eg: >>> >>> upstream backend { >>> least_conn; >>> server 192.168.1.21:443; >>> server 192.168.1.21:444; >>> server 192.168.1.21:445; >>> server 192.168.1.21:446; >>> server 192.168.1.21:447; >>> server 192.168.1.21:448; >>> server 192.168.1.21:449; >>> server 192.168.1.21:450; >>> server 192.168.1.21:451; >>> server 192.168.1.21:452; >>> } >>> >>> >>> >>> >>> On Tue, Mar 7, 2017 at 1:21 PM, Rainer Duffner wrote: >>>> >>>> Am 07.03.2017 um 22:12 schrieb Nelson Marcos : >>>> >>>> Do you really need to use different source ips or it's a solution that you >>>> picked? >>>> >>>> Also, is it a option to set the keepalive option in your upstream configure >>>> section? >>>> http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive >>>> >>>> >>>> >>>> >>>> I?m not sure if you can proxy web socket connections like http-connections. >>>> >>>> After all, they are persistent (hence the large number of connections). >>>> >>>> Why can?t you (OP) do the upgrade to 1.10? I thought it?s the only >>>> ?supported" version anyway? >>>> >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Maxim Konovalov From maxim at nginx.com Wed Mar 8 11:20:58 2017 From: maxim at nginx.com (Maxim Konovalov) Date: Wed, 8 Mar 2017 14:20:58 +0300 Subject: Reverse Proxy with 500k connections In-Reply-To: References: Message-ID: On 3/7/17 10:50 PM, larsg wrote: > Hi, > > we are operating native nginx 1.8.1 on RHEL as a reverse proxy. 
> The nginx routes requests to a backend server that can be reached from the
> proxy via a single internal IP address.
> We have to support a large number of concurrent websocket connections - say
> 100k to 500k.
>
> As we don't want to increase the number of proxy instances (with different
> IPs) and we cannot use the "proxy_bind transparent" option (it was introduced
> in a later nginx release, and an upgrade is not possible), we wanted to
> configure nginx to use different source IPs when routing to the backend.
> Thus, we want nginx to select an available source ip + source port when a
> connection is established with the backend.
>
> For that we assigned ten internal IPs to the proxy server and used the
> proxy_bind directive bound to 0.0.0.0.
> But this approach seems not to work. The nginx instance seems to always use
> the first IP as source IP.
> Using multiple proxy_bind's is not possible.
>
> So my question is: How can I configure nginx to select from a pool of source
> IPs? Or generally: to overcome the 64k problem?
>
We even wrote a blog post for you!

https://www.nginx.com/blog/overcoming-ephemeral-port-exhaustion-nginx-plus/

As a side note: I'd really encourage all of you to add our blog rss to
your feeds. While there is some marketing "noise" we are still trying to
make it useful for tech people too.

--
Maxim Konovalov

From reallfqq-nginx at yahoo.fr Wed Mar 8 13:02:53 2017
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Wed, 8 Mar 2017 14:02:53 +0100
Subject: Reverse proxy problem with an application
In-Reply-To: <1797319055.5405349.1488836103023@mail.yahoo.com>
References: <1797319055.5405349.1488836103023.ref@mail.yahoo.com> <1797319055.5405349.1488836103023@mail.yahoo.com>
Message-ID: 

This clearly looks like an application problem and not an nginx-related
one. nginx does not remove cookies nor, as the configuration snippet you
shared suggests, handles authentication.
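As a quick aside for debugging this kind of cookie problem: one way to see
what actually crosses the proxy is a dedicated access log. A sketch only
(the format name and log path are made up for illustration, and note that
$upstream_http_set_cookie may expose only one Set-Cookie header when the
backend sends several):

```nginx
# Log the cookies the client sent alongside what the upstream tried to
# set, to spot cookies that get lost between backend and browser.
log_format cookie_debug '$remote_addr "$http_cookie" -> '
                        '"$upstream_http_set_cookie"';

server {
    listen 80;
    access_log /var/log/nginx/cookie_debug.log cookie_debug;
    # ... proxy_pass configuration as elsewhere in the thread ...
}
```

Comparing that log for the direct and the proxied case should quickly show
whether the session cookie is dropped by the application or never reaches
the client.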
If you use DNS, make sure all requests are served by the instance of nginx
you quote, including redirects which might happen on login (have a look at
access logs). You can also investigate the content of cookies received
either from downstream or upstream if you think it is related to your
problem.

If you have a question on the nginx configuration, this ML is here to help.
Otherwise, you'll need to reroute your question where appropriate.
---
*B. R.*

On Mon, Mar 6, 2017 at 10:35 PM, Mik J via nginx wrote:

> Hello,
>
> I have run an application behind a nginx reverse proxy and I can't make it
> to work
>
> a) if I access this application using https://1.1.1.1:443
> it works (certificate warning)
> b) if I access this application using https://myapp.mydomain.org, I get
> access to the login page
>     location ^~ / {
>         proxy_pass        https://1.1.1.1:443;
>         proxy_redirect    off;
>         proxy_set_header  Host             $http_host;
>         proxy_set_header  X-Real-IP        $remote_addr;
>         proxy_set_header  X-Forwarded-For  $proxy_add_x_forwarded_for;
>         proxy_hide_header X-Frame-Options;
>         proxy_hide_header X-Content-Security-Policy;
>         proxy_hide_header X-Content-Type-Options;
>         proxy_hide_header X-WebKit-CSP;
>         proxy_hide_header content-security-policy;
>         proxy_hide_header x-xss-protection;
>         proxy_set_header  X-NginX-Proxy true;
>         proxy_ssl_session_reuse off;
>     }
> c) I log in in the page and after some time (2/3 seconds) the application
> logs me out
>
> When I log in directly case a) I notice that I have (firebug)
> CookieSaveStateCookie=root; APPSESSIONID=070ABC6AE433D2CAEDCFFB1E43074416;
> testcookieenabled
>
> Whereas when I log in in case c) I have
> APPSESSIONID=070ABC6AE433D2CAEDCFFB1E43074416; testcookieenabled
>
> So I feel there's a problem with the session or something like that.
> PS: There is only one backend server and I can't run plain http (disable
> https)
>
> Does anyone has an idea ?
>
>
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at forum.nginx.org Wed Mar 8 13:09:00 2017
From: nginx-forum at forum.nginx.org (bongtv)
Date: Wed, 08 Mar 2017 08:09:00 -0500
Subject: MP4 progressive download needs hint tracks in MP4 file?
Message-ID: <3e1d4c21cc9cfc21c48cf73af476e8d8.NginxMailingListEnglish@forum.nginx.org>

I'm wondering if RTP hint tracks in MP4 files are necessary to provide
progressive download functionality over my NGINX server configuration. I
can't remember where, but I read somewhere that these additional tracks
support the server with seeking operations (byte-range).

Thanks for your feedback,
Hannes

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272831,272831#msg-272831

From mikydevel at yahoo.fr Wed Mar 8 14:06:17 2017
From: mikydevel at yahoo.fr (Mik J)
Date: Wed, 8 Mar 2017 14:06:17 +0000 (UTC)
Subject: Reverse proxy problem with an application
In-Reply-To: 
References: <1797319055.5405349.1488836103023.ref@mail.yahoo.com> <1797319055.5405349.1488836103023@mail.yahoo.com>
Message-ID: <1343931890.726349.1488981977764@mail.yahoo.com>

Hello BR,

Thank you for your answer and for the hints. I'll investigate further in
that direction. Have a nice week.

Le Mercredi 8 mars 2017 14h03, B.R. via nginx a écrit :
If you got a question on the nginx configuration this ML is here to help.
Otherwise, you'll need to reroute your question where appropriate.
---
B. R.

On Mon, Mar 6, 2017 at 10:35 PM, Mik J via nginx wrote:

Hello,

I have run an application behind a nginx reverse proxy and I can't make it
to work

a) if I access this application using https://1.1.1.1:443
it works (certificate warning)
b) if I access this application using https://myapp.mydomain.org, I get
access to the login page

    location ^~ / {
        proxy_pass        https://1.1.1.1:443;
        proxy_redirect    off;
        proxy_set_header  Host             $http_host;
        proxy_set_header  X-Real-IP        $remote_addr;
        proxy_set_header  X-Forwarded-For  $proxy_add_x_forwarded_for;
        proxy_hide_header X-Frame-Options;
        proxy_hide_header X-Content-Security-Policy;
        proxy_hide_header X-Content-Type-Options;
        proxy_hide_header X-WebKit-CSP;
        proxy_hide_header content-security-policy;
        proxy_hide_header x-xss-protection;
        proxy_set_header  X-NginX-Proxy true;
        proxy_ssl_session_reuse off;
    }

c) I log in in the page and after some time (2/3 seconds) the application
logs me out

When I log in directly case a) I notice that I have (firebug)
CookieSaveStateCookie=root; APPSESSIONID=070ABC6AE433D2CAEDCFFB1E43074416;
testcookieenabled

Whereas when I log in in case c) I have
APPSESSIONID=070ABC6AE433D2CAEDCFFB1E43074416; testcookieenabled

So I feel there's a problem with the session or something like that.
PS: There is only one backend server and I can't run plain http (disable
https)

Does anyone has an idea ?

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at forum.nginx.org Wed Mar 8 14:40:05 2017
From: nginx-forum at forum.nginx.org (shashik)
Date: Wed, 08 Mar 2017 09:40:05 -0500
Subject: mp4 recording using nginx rtmp module
Message-ID: <0b70f0e08dae9249273edf252e6fae09.NginxMailingListEnglish@forum.nginx.org>

We are using Wowza streaming engine to record live TV shows, which gives
recorded output in the mp4 format. We are evaluating the Nginx RTMP module
to do the same mp4 recording. However, it is observed that this module does
recording only in the flv format. Is there any way to record a live stream
directly in the mp4 format?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272833,272833#msg-272833

From tolga.ceylan at gmail.com Wed Mar 8 18:45:54 2017
From: tolga.ceylan at gmail.com (Tolga Ceylan)
Date: Wed, 8 Mar 2017 10:45:54 -0800
Subject: Reverse Proxy with 500k connections
In-Reply-To: 
References: 
Message-ID: 

Is IP_BIND_ADDRESS_NO_PORT the best solution for OP's case? Unlike the
blog post with two backends, OP's case has one backend server. If any of
the hash slots exceeds the 65K port limit, there's no chance to recover.
Despite having enough port capacity, the client will receive an error if
the client ip/port hashed to a full slot.

IMHO picking the bind IP based on a client ip/port hash is not very robust
in this case, since you can't really make sure you are directing 10% of
the traffic to each address. This solution does not consider long-lived
connections (web sockets), and the hash slots could get out of balance
over time.

On Wed, Mar 8, 2017 at 3:20 AM, Maxim Konovalov wrote:
> On 3/7/17 10:50 PM, larsg wrote:
>> Hi,
>>
>> we are operating native nginx 1.8.1 on RHEL as a reverse proxy.
>> >> As we don't want to increase the number of proxy instances (with different >> IPs) and we cannot use the "proxy_bind transarent" option (was introduced in >> a later nginx release, upgrade is not possible) we wanted to configure the >> nginx to use different source IPs then routing to the backend. Thus, we want >> nginx to select an available source ip + source port when a connection is >> established with the backend. >> >> For that we assigned ten internal IPs to the proxy server and used the >> proxy_bind directive bound to 0.0.0.0. >> But this approach seems not to work. The nginx instance seems always use the >> first IP as source IP. >> Using multiple proxy_bind's is not possible. >> >> So my question is: How can I configure nginx to select from a pool of source >> IPs? Or generally: to overcome the 64k problem? >> > We ever wrote a blog post for you! > > https://www.nginx.com/blog/overcoming-ephemeral-port-exhaustion-nginx-plus/ > > As a side note: I'd really encourage all of you to add our blog rss > to your feeds. While there is some marketing "noise" we are still > trying to make it useful for tech people too. > > -- > Maxim Konovalov > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From francis at daoine.org Wed Mar 8 20:03:06 2017 From: francis at daoine.org (Francis Daly) Date: Wed, 8 Mar 2017 20:03:06 +0000 Subject: Nginx Map how to check value if empty In-Reply-To: <6bca319b739893fcbc4734247e82370f.NginxMailingListEnglish@forum.nginx.org> References: <20170307212359.GE15209@daoine.org> <6bca319b739893fcbc4734247e82370f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170308200306.GG15209@daoine.org> On Tue, Mar 07, 2017 at 06:44:05PM -0500, c0nw0nk wrote: Hi there, > I was just looking at the realip module but that module does not seem to > support fallback methods like I demonstrated I was in need of. 
I'm not convinced that you need anything other than what the realip module provides; but it's your system and you can configure it however you want. If you can (temporarily) add a second access log file which uses a format like: remote_addr is $remote_addr and http_x_forwarded_for is $http_x_forwarded_for and http_cf_connecting_ip is $http_cf_connecting_ip then you should be able to show one line which has $remote_addr being a trusted address and the "end" bit of $http_x_forwarded_for not being exactly what you want to use. > (If it does > support multiple headers and fallback conditions can someone provide a > demonstration) It doesn't; so if you need that, then you must build your own work-alike (like you've done); or write or encourage some to write the code to improve the realip module for this use case. f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Mar 8 20:17:13 2017 From: francis at daoine.org (Francis Daly) Date: Wed, 8 Mar 2017 20:17:13 +0000 Subject: Nginx Map how to check value if empty In-Reply-To: References: <20170307212359.GE15209@daoine.org> <6bca319b739893fcbc4734247e82370f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170308201713.GH15209@daoine.org> On Wed, Mar 08, 2017 at 01:56:04AM -0500, c0nw0nk wrote: Hi there, > The usage of the final output is as easy as this. "$client_ip_output;" > limit_req_zone $client_ip_output zone=one:10m rate=1r/s; #usage example for > the resulting output after all fallback checks and ip whitelist checks etc. > > Based of what I showed here can anyone point out any problems they see with > it etc from what I want to achieve I think it should work fine. How many requests should I make with a really long (but different every time) X-Forwarded-For header before your zone fills? 
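One way around that pitfall is to let the realip module rewrite the client address only when the request comes from a trusted proxy, and then key the zone on the fixed-size $binary_remote_addr. A minimal sketch (the 203.0.113.0/24 range below is a placeholder for your CDN's real address range, not something from this thread):

```nginx
# Sketch only: trust X-Forwarded-For solely from the CDN's range.
set_real_ip_from  203.0.113.0/24;
real_ip_header    X-Forwarded-For;
real_ip_recursive on;

# $binary_remote_addr is always 4 (or 16) bytes, so arbitrarily long
# header values cannot inflate the shared memory zone.
limit_req_zone $binary_remote_addr zone=perip:10m rate=1r/s;
```

Requests from untrusted sources keep their connection address as the key, so the zone size stays predictable.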
Cheers, f -- Francis Daly francis at daoine.org From tecnologiaterabyte at gmail.com Wed Mar 8 20:57:18 2017 From: tecnologiaterabyte at gmail.com (Wilmer Arambula) Date: Wed, 8 Mar 2017 16:57:18 -0400 Subject: virtual host conf. Message-ID: I'm new using nginx, I see the difference in performance, my question is the following I have the following virtual host: example.conf [example.conf] # create new server { listen 80; server_name www.example.com; location / { root /home/example/public_html; fastcgi_pass 127.0.0.1:7000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $request_filename; include /etc/nginx/fastcgi_params; fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_buffer_size 128k; fastcgi_buffers 256 4k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_intercept_errors on; index index.php index.html index.htm; } location /phpMyAdmin { root /usr/share/; index index.php index.html index.htm; location ~ ^/phpMyAdmin/(.+\.php)$ { try_files $uri =404; root /usr/share/; fastcgi_pass 127.0.0.1:7000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $request_filename; include /etc/nginx/fastcgi_params; fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_buffer_size 128k; fastcgi_buffers 256 4k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_intercept_errors on; } location ~* ^/phpMyAdmin/(.+\.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt))$ { root /usr/share/; } } } As I can add the alias to phpMyAdmin, what would be the best way to do that, another option would be convenient to add, I have installed nginx + php-fpm + spawn-fcgi, Thks, -- *Wilmer Arambula.* -------------- next part -------------- An HTML attachment was scrubbed... 
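On the phpMyAdmin question above: one common pattern is the alias directive, which strips the location prefix before the filesystem lookup, so the nested root-plus-prefix juggling goes away. A sketch, assuming the install lives in /usr/share/phpMyAdmin (adjust the path and keep a PHP fastcgi location alongside it, as in the original config):

```nginx
# Sketch: /phpMyAdmin/index.php is served from
# /usr/share/phpMyAdmin/index.php; the prefix is not repeated on disk.
location /phpMyAdmin/ {
    alias /usr/share/phpMyAdmin/;
    index index.php;
}
```

With root, the full URI is appended to the path; with alias, only the part after the location prefix is, which is usually what an "alias to phpMyAdmin" means.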
URL: From he.hailong5 at zte.com.cn Thu Mar 9 03:37:50 2017 From: he.hailong5 at zte.com.cn (he.hailong5 at zte.com.cn) Date: Thu, 9 Mar 2017 11:37:50 +0800 (CST) Subject: unbearable reload downtime when introducing a new server block Message-ID: <201703091137506771142@zte.com.cn> Hi, In my case, I need to add new server block from time to time in nginx. after reloading, it is observed that the newly added server is not available for seconds. is this as expected? how to improve it? BR, Joe -------------- next part -------------- An HTML attachment was scrubbed... URL: From anoopalias01 at gmail.com Thu Mar 9 05:31:38 2017 From: anoopalias01 at gmail.com (Anoop Alias) Date: Thu, 9 Mar 2017 11:01:38 +0530 Subject: map directive doubt Message-ID: Hi, Just have a doubt in map directive map $http_user_agent $upstreamname { default desktop; ~(iPhone|Android) mobile; } is correct ? ######################## or does the regex need to fully match the variable? map $http_user_agent $upstreamname { default desktop; ~*.*Android.* mobile; ~*.*iPhone.* mobile; } -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From he.hailong5 at zte.com.cn Thu Mar 9 05:50:16 2017 From: he.hailong5 at zte.com.cn (he.hailong5 at zte.com.cn) Date: Thu, 9 Mar 2017 13:50:16 +0800 (CST) Subject: unbearable reload downtime when introducing a new server block References: 201703091132355270899@zte.com.cn Message-ID: <201703091350162254282@zte.com.cn> Hi, In my case, I need to add new server block from time to time in nginx. after reloading, it is observed that the newly added server is not available for seconds. is this as expected? how to improve it? BR, Joe -------------- next part -------------- An HTML attachment was scrubbed...
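On the map regex question in this digest: a regex in map is run as an unanchored search against the variable, it does not have to match the whole value, so the first form already works and the .* padding is unnecessary. Add ~* for case-insensitive matching:

```nginx
map $http_user_agent $upstreamname {
    default             desktop;
    # Unanchored, case-insensitive: matches "iPhone" or "Android"
    # anywhere in the User-Agent string.
    ~*(iphone|android)  mobile;
}
```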
URL: From anoopalias01 at gmail.com Thu Mar 9 07:01:55 2017 From: anoopalias01 at gmail.com (Anoop Alias) Date: Thu, 9 Mar 2017 12:31:55 +0530 Subject: combining map Message-ID: Hi, I have 3 maps defined ############################ map $request_method $requestnocache { default 0; POST 1; } map $query_string $querystringnc { default 1; "" 0; } map $http_cookie $mccookienocache { default 0; _mcnc 1; } ############################### I need to create a single variable that is 1 if either of the 3 above is 1 and 0 if all are 0. Will the following be enough map "$requestnocache$querystringnc$mccookienocache" { default 0; ~1 1; } Thanks, -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From iippolitov at nginx.com Thu Mar 9 08:09:17 2017 From: iippolitov at nginx.com (Igor A. Ippolitov) Date: Thu, 9 Mar 2017 11:09:17 +0300 Subject: combining map In-Reply-To: References: Message-ID: <99bb8a6c-9626-b188-b6d7-b7a19390f965@nginx.com> If you are going to use it inside proxy_no_cache directive, you can combine proxy_cache_method (POST is not included by default) and 'proxy_no_cache $query_string$cookie__mcnc' The latter will not cache the request until there is query string or a cookie with a value set. So basically, it looks like you can avoid using maps in this case. On 09.03.2017 10:01, Anoop Alias wrote: > Hi, > > I have 3 maps defined > ############################ > map $request_method $requestnocache { > default 0; > POST 1; > } > > map $query_string $querystringnc { > default 1; > "" 0; > } > > map $http_cookie $mccookienocache { > default 0; > _mcnc 1; > } > ############################### > > I need to create a single variable that is 1 if either of the 3 above > is 1 and 0 if all are 0. 
Will the following be enough > > map "$requestnocache$querystringnc$mccookienocache" { > default 0; > ~1 1; > } > > > > Thanks, > -- > *Anoop P Alias* > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From anoopalias01 at gmail.com Thu Mar 9 08:35:20 2017 From: anoopalias01 at gmail.com (Anoop Alias) Date: Thu, 9 Mar 2017 14:05:20 +0530 Subject: combining map In-Reply-To: <99bb8a6c-9626-b188-b6d7-b7a19390f965@nginx.com> References: <99bb8a6c-9626-b188-b6d7-b7a19390f965@nginx.com> Message-ID: Hi Igor, I need to use this with ################## srcache_fetch_skip $skip_cache; srcache_store_skip $skip_cache; ################## As per srcache docs the value must be 0 for not skipping and anything other than 0 will be considered for skipping Will combining the variables work here too? Thanks, On Thu, Mar 9, 2017 at 1:39 PM, Igor A. Ippolitov wrote: > If you are going to use it inside proxy_no_cache directive, you can > combine proxy_cache_method (POST is not included by default) and > 'proxy_no_cache $query_string$cookie__mcnc' > The latter will not cache the request until there is query string or a > cookie with a value set. > So basically, it looks like you can avoid using maps in this case. > > > On 09.03.2017 10:01, Anoop Alias wrote: > > Hi, > > I have 3 maps defined > ############################ > map $request_method $requestnocache { > default 0; > POST 1; > } > > map $query_string $querystringnc { > default 1; > "" 0; > } > > map $http_cookie $mccookienocache { > default 0; > _mcnc 1; > } > ############################### > > I need to create a single variable that is 1 if either of the 3 above is 1 > and 0 if all are 0. 
Will the following be enough > > map "$requestnocache$querystringnc$mccookienocache" { > default 0; > ~1 1; > } > > > > Thanks, > -- > *Anoop P Alias* > > > > _______________________________________________ > nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From iippolitov at nginx.com Thu Mar 9 08:53:29 2017 From: iippolitov at nginx.com (Igor A. Ippolitov) Date: Thu, 9 Mar 2017 11:53:29 +0300 Subject: combining map In-Reply-To: References: <99bb8a6c-9626-b188-b6d7-b7a19390f965@nginx.com> Message-ID: <035b8498-9e3c-8f26-7731-0cdafef17094@nginx.com> No, it won't. You can try something like map $request_method $requestnocache { default ""; POST whatever; } map $requestnocache$query_string$cookie__mcnc $skip_cache { default 0; ~. 1; } So basically, "don't skip by default, but skip, if there are any letters" You can test this by extending your log format with variables used/produced by the map and comparing results to what you expect there to be. On 09.03.2017 11:35, Anoop Alias wrote: > Hi Igor, > > I need to use this with > > ################## > srcache_fetch_skip $skip_cache; > srcache_store_skip $skip_cache; > ################## > > As per srcache docs the value must be 0 for not skipping and anything > other than 0 will be considered for skipping > > Will combining the variables work here too? > > Thanks, > > On Thu, Mar 9, 2017 at 1:39 PM, Igor A. Ippolitov > > wrote: > > If you are going to use it inside proxy_no_cache directive, you > can combine proxy_cache_method (POST is not included by default) > and 'proxy_no_cache $query_string$cookie__mcnc' > The latter will not cache the request until there is query string > or a cookie with a value set. 
> So basically, it looks like you can avoid using maps in this case. > > > On 09.03.2017 10:01, Anoop Alias wrote: >> Hi, >> >> I have 3 maps defined >> ############################ >> map $request_method $requestnocache { >> default 0; >> POST 1; >> } >> >> map $query_string $querystringnc { >> default 1; >> "" 0; >> } >> >> map $http_cookie $mccookienocache { >> default 0; >> _mcnc 1; >> } >> ############################### >> >> I need to create a single variable that is 1 if either of the 3 >> above is 1 and 0 if all are 0. Will the following be enough >> >> map "$requestnocache$querystringnc$mccookienocache" { >> default 0; >> ~1 1; >> } >> >> >> >> Thanks, >> -- >> *Anoop P Alias* >> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > _______________________________________________ nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > -- > *Anoop P Alias* > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxim at nginx.com Thu Mar 9 09:35:18 2017 From: maxim at nginx.com (Maxim Konovalov) Date: Thu, 9 Mar 2017 12:35:18 +0300 Subject: Reverse Proxy with 500k connections In-Reply-To: References: Message-ID: This is just a matter of number of ip addresses you have in a proxy_bind pool and suitable hash function for the split_clients map. Adding additional logic to proxy_bind ip address selection you still can face the same problem. On 3/8/17 9:45 PM, Tolga Ceylan wrote: > is IP_BIND_ADDRESS_NO_PORT the best solution for OP's case? Unlike the > blog post with two backends, OP's case has one backend server. If any > of the hash slots exceed the 65K port limit, there's no chance to > recover. 
Despite having enough port capacity, the client will receive > an error if the client ip/port hashed to a full slot. > > IMHO picking bind IP based on a client ip/port hash is not very robust > in this case since > you can't really make sure you really are directing %10 of the > traffic. This solution does > not consider long connections (web sockets) and the hash slot could > get out of balance > over time. > > > On Wed, Mar 8, 2017 at 3:20 AM, Maxim Konovalov wrote: >> On 3/7/17 10:50 PM, larsg wrote: >>> Hi, >>> >>> we are operating native nginx 1.8.1 on RHEL as a reverse proxy. >>> The nginx routes requests to a backend server that can be reached from the >>> proxy via a single internal IP address. >>> We have to support a large number of concurrent websocket connections - say >>> 100k to 500k. >>> >>> As we don't want to increase the number of proxy instances (with different >>> IPs) and we cannot use the "proxy_bind transarent" option (was introduced in >>> a later nginx release, upgrade is not possible) we wanted to configure the >>> nginx to use different source IPs then routing to the backend. Thus, we want >>> nginx to select an available source ip + source port when a connection is >>> established with the backend. >>> >>> For that we assigned ten internal IPs to the proxy server and used the >>> proxy_bind directive bound to 0.0.0.0. >>> But this approach seems not to work. The nginx instance seems always use the >>> first IP as source IP. >>> Using multiple proxy_bind's is not possible. >>> >>> So my question is: How can I configure nginx to select from a pool of source >>> IPs? Or generally: to overcome the 64k problem? >>> >> We ever wrote a blog post for you! >> >> https://www.nginx.com/blog/overcoming-ephemeral-port-exhaustion-nginx-plus/ >> >> As a side note: I'd really encourage all of you to add our blog rss >> to your feeds. While there is some marketing "noise" we are still >> trying to make it useful for tech people too. 
>> >> -- >> Maxim Konovalov >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Maxim Konovalov From ru at nginx.com Thu Mar 9 09:53:50 2017 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 9 Mar 2017 12:53:50 +0300 Subject: combining map In-Reply-To: References: <99bb8a6c-9626-b188-b6d7-b7a19390f965@nginx.com> Message-ID: <20170309095350.GA50187@lo0.su> On Thu, Mar 09, 2017 at 02:05:20PM +0530, Anoop Alias wrote: > Hi Igor, > > I need to use this with > > ################## > srcache_fetch_skip $skip_cache; > srcache_store_skip $skip_cache; > ################## > > As per srcache docs the value must be 0 for not skipping and anything other > than 0 will be considered for skipping The value can also be an empty string for not skipping. > Will combining the variables work here too? You can have three maps as you initially suggested but with an empty string as a default value (which is the default for "map"). You can then combine three of them in the srcache_*_skip directive. > Thanks, > > On Thu, Mar 9, 2017 at 1:39 PM, Igor A. Ippolitov > wrote: > > > If you are going to use it inside proxy_no_cache directive, you can > > combine proxy_cache_method (POST is not included by default) and > > 'proxy_no_cache $query_string$cookie__mcnc' > > The latter will not cache the request until there is query string or a > > cookie with a value set. > > So basically, it looks like you can avoid using maps in this case. 
> > > > > > On 09.03.2017 10:01, Anoop Alias wrote: > > > > Hi, > > > > I have 3 maps defined > > ############################ > > map $request_method $requestnocache { > > default 0; > > POST 1; > > } > > > > map $query_string $querystringnc { > > default 1; > > "" 0; > > } > > > > map $http_cookie $mccookienocache { > > default 0; > > _mcnc 1; > > } > > ############################### > > > > I need to create a single variable that is 1 if either of the 3 above is 1 > > and 0 if all are 0. Will the following be enough > > > > map "$requestnocache$querystringnc$mccookienocache" { > > default 0; > > ~1 1; > > } > > > > > > > > Thanks, > > -- > > *Anoop P Alias* > > > > > > > > _______________________________________________ > > nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > -- > *Anoop P Alias* > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Ruslan Ermilov Assume stupidity not malice From yongtao_you at yahoo.com Thu Mar 9 12:58:23 2017 From: yongtao_you at yahoo.com (Yongtao You) Date: Thu, 9 Mar 2017 12:58:23 +0000 (UTC) Subject: Conflict between form-input-nginx-module and nginx-auth-request-module? References: <1719659735.1981960.1489064303554.ref@mail.yahoo.com> Message-ID: <1719659735.1981960.1489064303554@mail.yahoo.com> Hi, I was able to get both form-input-nginx-module and nginx-auth-request-module work fine, individually, with nginx-1.9.9. Putting them together, and the HTTP POST request just timeout (I never got any response back). Any one had any experience using both of them in the same location block? Thanks.Yongtao -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From efeldhusen.lists at gmail.com Thu Mar 9 13:08:22 2017 From: efeldhusen.lists at gmail.com (Eric Feldhusen) Date: Thu, 9 Mar 2017 08:08:22 -0500 Subject: Nginx reverse proxy for TFTP UDP port 69 traffic In-Reply-To: <2fd0e5d5-bd51-cff5-64e9-fdb254f519ed@nginx.com> References: <04D23F0F-B586-42A5-8C9E-AFA77F5ECEBB@gmail.com> <2fd0e5d5-bd51-cff5-64e9-fdb254f519ed@nginx.com> Message-ID: On Mar 7, 2017, at 4:58 PM, Vladimir Homutov wrote: On 08.03.2017 00:21, Eric Feldhusen wrote: I?m trying to use Nginx to reverse proxy TFTP UDP port 69 traffic and I?m having a problem with getting files through the nginx reverse proxy. My configuration is simple, I?m running TFTP on one Centos 6.x server and the Nginx reserve proxy on another Centos 6.x server with the latest Nginx mainline 1.11.10 from the nginx.org repository. TFTP connections to the TFTP server directly work. Using the same commands through the Nginx reverse proxy, connects, but will not download or upload a file through it. If you have any suggestions, I?d appreciate a nudge in the right direction. I?m assuming it?s something I?m missing. Eric Feldhusen Unfortunately, TFTP will not work, because it requires that after initial server's reply client will send packets to the port, chosen by server (i.e. not 69. but some auto-assigned). also, TFTP server recognizes clients by its source port and it changes when a packet passes proxy - each packet is originating from a new source port on proxy. Ah, I had just started to look up specifically how TFTP connections work, so I hadn?t seen this yet. But that makes sense with what I was seeing. Thank you for the quick reply, I appreciate it. Eric Feldhusen -------------- next part -------------- An HTML attachment was scrubbed... 
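The source-port behaviour described above is what breaks TFTP specifically. For UDP protocols where the server answers from the same port it was contacted on (DNS is the usual example), the stream proxy behaves as expected. A minimal sketch, assuming nginx is built with the stream module and using a placeholder backend address:

```nginx
stream {
    server {
        listen 53 udp;
        # The backend replies from port 53, so the proxy can associate
        # each response datagram with the client that sent the query.
        proxy_pass 192.0.2.10:53;
        proxy_responses 1;   # one response expected per query
    }
}
```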
URL: From mdounin at mdounin.ru Thu Mar 9 14:03:49 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 9 Mar 2017 17:03:49 +0300 Subject: Passing $upstream_response_time in a header In-Reply-To: References: Message-ID: <20170309140349.GO23126@mdounin.ru> Hello! On Tue, Mar 07, 2017 at 04:38:04PM -0500, Jonathan Simowitz via nginx wrote: > Hello, > > I have an nginx server that runs as reverse proxy and I would like to pass > the $upstream_response_time value in a header. I find that when I do the > value is actually a linux timestamp with millisecond resolution instead of > a value of seconds with millisecond resolution. Apparently this is > automatically converted when written to the logs. Is there a way to trigger > the conversion for passing in a header? The $upstream_response_time variable is the full time of receiving the response from the upstream server. When sending a response header full time is not yet known, and the variable will contain garbage. If you want to return something known when sending a response header, use $upstream_header_time instead. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Thu Mar 9 14:05:31 2017 From: nginx-forum at forum.nginx.org (maznislav) Date: Thu, 09 Mar 2017 09:05:31 -0500 Subject: Fastcgi_cache permissions Message-ID: Hello, I was searching for an answer for this question quite a bit, but unfortunately I was not able to find such, so any help is much appreciated. The issue is the following - I have enabled Fastcgi_cache for my server and I have noticed that the cache has very restricted permissions 700 to be precise. I need to be able to change those permissions, but unfortunately I am not able to do so. I do not see any configuration variable that is responsible for this, neither the nginx process uses the umask value set for generating the permissions for those files. If someone has an idea how can I make nginx to use custom permissions for the cache that would great. Thanks a lot. Regards. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272853,272853#msg-272853 From nginx-forum at forum.nginx.org Thu Mar 9 14:52:13 2017 From: nginx-forum at forum.nginx.org (larsg) Date: Thu, 09 Mar 2017 09:52:13 -0500 Subject: Reverse Proxy with 500k connections In-Reply-To: References: Message-ID: <2cb35a797b5eeef506c4511fd751b78e.NginxMailingListEnglish@forum.nginx.org> Thanks for the advice. I implemented this approach. Unfortunately not with 100% success. When enabling sysctl option "net.ipv4.ip_nonlocal_bind = 1" it is possible to use local IP addresses (192.168.1.130-139) as proxy_bind address. But than using such an address (other than 0.0.0.0), nginx will produce an error message. Interesting aspect is: attribute "server" in the log entry is empty. When using 0.0.0.0 as proxy_bind, everything is fine. Do you have any ideas? 2017/03/09 14:27:09 [crit] 69765#0: *478633 connect() to 192.168.1.21:443 failed (22: Invalid argument) while connecting to upstream, client: x.x.x.x, server: , request: "GET /myservice HTTP/1.1", upstream: "https://192.168.1.21:443/myservice", host: "xxxxxxx:44301" split_clients "${remote_addr}AAAA" $proxy_ip { # does not work 100% 192.168.1.130; # works 100% 0.0.0.0; } server { listen 44301 ssl backlog=163840; #works #proxy_bind 0.0.0.0; #does not work #proxy_bind 192.168.1.130; proxy_bind $proxy_ip; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272808,272854#msg-272854 From maxozerov at i-free.com Thu Mar 9 14:58:38 2017 From: maxozerov at i-free.com (Maxim Ozerov) Date: Thu, 9 Mar 2017 14:58:38 +0000 Subject: Fastcgi_cache permissions In-Reply-To: References: Message-ID: And what is the purpose of changing permissions? In other words - nginx.conf - user www-data; So for example, Directories Access: (0700/drwx) with Uid: (33/www-data) And no one forbids you to access the cache from this user for manipulating files (allow other process (for example php-fpm runs as www-data) to delete Nginx cache files). 
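The per-pool approach sketched above can look like this in php-fpm (pool name, socket path, and user are illustrative; match the user to the one nginx runs as):

```ini
; /etc/php-fpm.d/cache-tools.conf -- hypothetical pool sharing nginx's user
[cache-tools]
user = www-data
group = www-data
listen = /run/php-fpm/cache-tools.sock
pm = ondemand
pm.max_children = 2
```

Only the scripts that must touch the fastcgi_cache directory are routed to this pool; the rest of the site keeps its own pool and user.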
-----Original Message----- From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of maznislav Sent: Thursday, March 9, 2017 5:06 PM To: nginx at nginx.org Subject: Fastcgi_cache permissions Hello, I was searching for an answer for this question quite a bit, but unfortunately I was not able to find such, so any help is much appreciated. The issue is the following - I have enabled Fastcgi_cache for my server and I have noticed that the cache has very restricted permissions 700 to be precise. I need to be able to change those permissions, but unfortunately I am not able to do so. I do not see any configuration variable that is responsible for this, neither the nginx process uses the umask value set for generating the permissions for those files. If someone has an idea how can I make nginx to use custom permissions for the cache that would great. Thanks a lot. Regards. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272853,272853#msg-272853 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Thu Mar 9 15:57:10 2017 From: nginx-forum at forum.nginx.org (maznislav) Date: Thu, 09 Mar 2017 10:57:10 -0500 Subject: Fastcgi_cache permissions In-Reply-To: References: Message-ID: <874cc1ef738cfd3df9cb504c054d7f15.NginxMailingListEnglish@forum.nginx.org> Hi Maxim, thanks for the reply. The use case that I have is when php-fpm is running as a user different than the nginx one. In this case the permissions being set as 0700 basically deny any manipulation of the cached files from php scripts. Everytime you try something like this you get permission denied. A similar scenario and possible solution which unfortunately doesn't work is described in this thread. 
https://github.com/rtCamp/nginx-helper/issues/63 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272853,272856#msg-272856 From r at roze.lv Thu Mar 9 17:08:52 2017 From: r at roze.lv (Reinis Rozitis) Date: Thu, 9 Mar 2017 19:08:52 +0200 Subject: Reverse Proxy with 500k connections In-Reply-To: <2cb35a797b5eeef506c4511fd751b78e.NginxMailingListEnglish@forum.nginx.org> References: <2cb35a797b5eeef506c4511fd751b78e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <000001d298f7$d3c87700$7b596500$@roze.lv> > When enabling sysctl option "net.ipv4.ip_nonlocal_bind = 1" it is possible > to use local IP addresses (192.168.1.130-139) as proxy_bind address. > But than using such an address (other than 0.0.0.0), nginx will produce an > error message. Do the 192.168.1.130-139 IPs actually exist and are configured on the server? While you can bind to the IP it doesn't mean you can make an actual tcp connection to the upstream. net.ipv4.ip_nonlocal_bind is usually used when there is a need for a service to listen to a specific interface which doesn't exist yet on the server like in case of VRRP / Keepalived balancing etc. rr From nginx-forum at forum.nginx.org Thu Mar 9 17:20:22 2017 From: nginx-forum at forum.nginx.org (larsg) Date: Thu, 09 Mar 2017 12:20:22 -0500 Subject: Reverse Proxy with 500k connections In-Reply-To: <2cb35a797b5eeef506c4511fd751b78e.NginxMailingListEnglish@forum.nginx.org> References: <2cb35a797b5eeef506c4511fd751b78e.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi everybody, ok, I recognized another linux network problem that I solved now. Situation now is like following: When I call my upstream address via curl (on the nginx host) by selecting the corresponding local interface (eth0-9 = 192.168.1.130-139) everything is fine. curl https://192.168.1.21:443/remote/events --insecure --interface eth0 But when I specify the same IP address that refers to eth0 (eth1-9 etc.) I get an "110: Connection timed out". 
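With that third-party module the usual setup is a guarded purge location, along the lines of the module's own examples. A sketch, where "MYCACHE" is a placeholder zone name and the purge key must match the fastcgi_cache_key actually configured:

```nginx
# Sketch for the ngx_cache_purge module; restrict access tightly.
location ~ /purge(/.*) {
    allow 127.0.0.1;
    deny  all;
    fastcgi_cache_purge MYCACHE "$scheme$request_method$host$1";
}
```

A request to /purge/some/uri then evicts the cached entry for /some/uri without touching the cache files directly.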
Does anybody know this situation? Checked by sysctl config but it looks fine... split_clients "${remote_addr}${remote_port}AAAA" $proxy_ip { 100% 192.168.1.130; } server { listen 44301 ssl backlog=163840; proxy_bind $proxy_ip; #proxy_bind 192.168.1.130; .. 2017/03/09 16:54:33 [error] 30081#0: *11 upstream timed out (110: Connection timed out) while connecting to upstream, client: x.x.x.x, server: , request: "GET /remote/events HTTP/1.1", upstream: "https://192.168.1.21:443/remote/events", host: "xxxxx:44301" Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272808,272858#msg-272858 From r at roze.lv Thu Mar 9 17:24:21 2017 From: r at roze.lv (Reinis Rozitis) Date: Thu, 9 Mar 2017 19:24:21 +0200 Subject: Fastcgi_cache permissions In-Reply-To: <874cc1ef738cfd3df9cb504c054d7f15.NginxMailingListEnglish@forum.nginx.org> References: <874cc1ef738cfd3df9cb504c054d7f15.NginxMailingListEnglish@forum.nginx.org> Message-ID: <000501d298f9$fda4cb80$f8ee6280$@roze.lv> > thanks for the reply. The use case that I have is when php-fpm is running as a > user different than the nginx one. In this case the permissions being set as 0700 > basically deny any manipulation of the cached files from php scripts. Everytime > you try something like this you get permission denied. Why would you manipulate nginx cache files from php directly (or even if you do so why not run the nginx and phpfpm under same user then)? If you want to purge the request (only valid reason which comes to my mind) you should configure fastcgi_cache_purge ( http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_cache_purge ). The drawback is that's only for the commercial version. As an alternative you could use a third party module http://labs.frickle.com/nginx_ngx_cache_purge/ I'm not 100% sure about the compability with the newest nginx releases but you can contact the author about that (he is also in this list). 
rr From maxozerov at i-free.com Thu Mar 9 17:33:30 2017 From: maxozerov at i-free.com (Maxim Ozerov) Date: Thu, 9 Mar 2017 17:33:30 +0000 Subject: Fastcgi_cache permissions In-Reply-To: <000501d298f9$fda4cb80$f8ee6280$@roze.lv> References: <874cc1ef738cfd3df9cb504c054d7f15.NginxMailingListEnglish@forum.nginx.org> <000501d298f9$fda4cb80$f8ee6280$@roze.lv> Message-ID: <2f3d26a792844995bae4ac758135086b@srv-exch-mb02.i-free.local> > Why would you manipulate nginx cache files from php directly (or even if you do so why not run the nginx and phpfpm under same user then)? Yeah... For example: with php-fpm you can run each site with its own uid/gid (pool configuration), and with address on which to accept FastCGI requests So, create a new pool file with the right user:group ... and send the specific purge request. -----Original Message----- From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Reinis Rozitis Sent: Thursday, March 9, 2017 8:24 PM To: nginx at nginx.org Subject: RE: RE: Fastcgi_cache permissions > thanks for the reply. The use case that I have is when php-fpm is > running as a user different than the nginx one. In this case the > permissions being set as 0700 basically deny any manipulation of the > cached files from php scripts. Everytime you try something like this you get permission denied. Why would you manipulate nginx cache files from php directly (or even if you do so why not run the nginx and phpfpm under same user then)? If you want to purge the request (only valid reason which comes to my mind) you should configure fastcgi_cache_purge ( http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_cache_purge ). The drawback is that's only for the commercial version. As an alternative you could use a third party module http://labs.frickle.com/nginx_ngx_cache_purge/ I'm not 100% sure about the compability with the newest nginx releases but you can contact the author about that (he is also in this list). 
rr _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Thu Mar 9 18:10:33 2017 From: nginx-forum at forum.nginx.org (larsg) Date: Thu, 09 Mar 2017 13:10:33 -0500 Subject: Reverse Proxy with 500k connections In-Reply-To: <000001d298f7$d3c87700$7b596500$@roze.lv> References: <000001d298f7$d3c87700$7b596500$@roze.lv> Message-ID: Hi Reinis, yes, IPs exist: ifconfig eth0: flags=4163 mtu 1500 inet 192.168.1.130 netmask 255.255.255.0 broadcast 192.168.1.255 ether fa:16:3e:1e:ad:da txqueuelen 1000 (Ethernet) ... eth1: flags=4163 mtu 1500 inet 192.168.1.131 netmask 255.255.255.0 broadcast 192.168.1.255 okay, I enabled net.ipv4.ip_nonlocal_bind=1 because nginx complained with an error message "cannot bind/assign address". net.ipv4.ip_nonlocal_bind solved that problem. But now with our other kernel adjustments (Reverse Path Forwarding mode 2, i.e. Loose mode as defined in RFC 3704) this option does not have any effect. So I disabled net.ipv4.ip_nonlocal_bind again. But same result... upstream timed out... Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272808,272862#msg-272862 From thresh at nginx.com Thu Mar 9 19:25:29 2017 From: thresh at nginx.com (Konstantin Pavlov) Date: Thu, 9 Mar 2017 22:25:29 +0300 Subject: Reverse Proxy with 500k connections In-Reply-To: References: <000001d298f7$d3c87700$7b596500$@roze.lv> Message-ID: On 09/03/2017 21:10, larsg wrote: > Hi Reinis, > > yes, IPs exist: > > ifconfig > eth0: flags=4163 mtu 1500 > inet 192.168.1.130 netmask 255.255.255.0 broadcast 192.168.1.255 > ether fa:16:3e:1e:ad:da txqueuelen 1000 (Ethernet) > ... > eth1: flags=4163 mtu 1500 > inet 192.168.1.131 netmask 255.255.255.0 broadcast 192.168.1.255 Are those addresses reachable outside this particular VM in your Openstack environment?
-- Konstantin Pavlov From nginx-forum at forum.nginx.org Thu Mar 9 20:10:13 2017 From: nginx-forum at forum.nginx.org (Vanhels) Date: Thu, 09 Mar 2017 15:10:13 -0500 Subject: configuration nginx server block [virtual host] with Ipv6. Message-ID: Hi, I have installed nginx + php-fpm (php5.4 / php5.6), i'm trying to set everything up for ipv6 in Centos 7.3, install from official nginx repo: [/etc/nginx/nginx.conf]: user nginx; worker_processes 1; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 65; #gzip on; include /etc/nginx/conf.d/*.conf; } [/etc/nginx/conf.d/default.conf]: server { listen [::]:80; server_name localhost; location ~ \.php$ { root html; fastcgi_split_path_info ^(.+\.php)(/.+)$; try_files $uri =404; fastcgi_pass [::]:9056; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; include fastcgi_params; } location / { root /usr/share/nginx/html; index index.php index.html index.htm; } error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } } [domain1.conf]: # create new server { listen [::]:80; root /home/domain1/public_html; index index.php index.html index.htm; server_name domain1 www.domain1; location / { try_files $uri $uri/ =404; } location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass [::]:9056; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; fastcgi_param PATH_INFO $fastcgi_script_name; 
fastcgi_buffer_size 128k; fastcgi_buffers 256 4k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_intercept_errors on; } } [subdomain.domain1.conf]: # create new server { listen [::]:80; root /home/domain1/public_html/subdomain; index index.php index.html index.htm; server_name subdomain.domain1 www.subdomain.domain1; location / { try_files $uri $uri/ =404; } location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass [::]:9056; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi_params; fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_buffer_size 128k; fastcgi_buffers 256 4k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_intercept_errors on; } } If in [domain.conf] change to: Listen 80; fastcgi_pass 127.0.0.1:9056; It works perfect, because this behavior I'm doing wrong, thank you in advance for your answers, Wilmer. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272864,272864#msg-272864 From francis at daoine.org Thu Mar 9 22:35:53 2017 From: francis at daoine.org (Francis Daly) Date: Thu, 9 Mar 2017 22:35:53 +0000 Subject: configuration nginx server block [virtual host] with Ipv6. In-Reply-To: References: Message-ID: <20170309223553.GI15209@daoine.org> On Thu, Mar 09, 2017 at 03:10:13PM -0500, Vanhels wrote: Hi there, > Hi, I have installed nginx + php-fpm (php5.4 / php5.6), i'm trying to set > everything up for ipv6 in Centos 7.3, install from official nginx repo: What part fails for you? Does nginx listen on the IPv6 port? Does the client connect to nginx? Does the fastcgi server listen on the IPv6 port? Does nginx connect to the fastcgi server? 
You suggest that something is bad with > listen [::]:80; > fastcgi_pass [::]:9056; but is good with > Listen 80; > fastcgi_pass 127.0.0.1:9056; One difference there is that your fastcgi_pass in IPv4 connects to an address:port, while in IPv6 it does not. I would guess that using [::1]:9056 might have a chance of helping. But only if you can already fetch a file from nginx on IPv6 when fastcgi is not involved. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Mar 9 22:50:25 2017 From: francis at daoine.org (Francis Daly) Date: Thu, 9 Mar 2017 22:50:25 +0000 Subject: map directive doubt In-Reply-To: References: Message-ID: <20170309225025.GJ15209@daoine.org> On Thu, Mar 09, 2017 at 11:01:38AM +0530, Anoop Alias wrote: Hi there, > Just have a doubt in map directive > > map $http_user_agent $upstreamname { > default desktop; > ~(iPhone|Android) mobile; > } > > is correct ? That doesn't look too hard to test. == server { listen 8880; return 200 "user agent = $http_user_agent; map = $upstreamname\n"; } == $ curl -H User-Agent:xxAndroidxx http://localhost:8880/x f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Thu Mar 9 23:06:05 2017 From: nginx-forum at forum.nginx.org (Vanhels) Date: Thu, 09 Mar 2017 18:06:05 -0500 Subject: configuration nginx server block [virtual host] with Ipv6. 
In-Reply-To: References: Message-ID: Thanks for your answer, I'll specify it better: Config Work Fine: [domain1.conf]: server { listen [::]:80; root /home/domain1/public_html; index index.php index.html index.htm; server_name domain1 www.domain1; location ~ \.php$ { fastcgi_pass [::]:9056; } } [subdomain.domain1.conf]: server { listen 80; root /home/domain1/public_html/subdomain; index index.php index.html index.htm; server_name subdomain.domain1 www.subdomain.domain1; location ~ \.php$ { fastcgi_pass 127.0.0.1:9056; } } Config No Work : [domain1.conf]: server { listen [::]:80; root /home/domain1/public_html; index index.php index.html index.htm; server_name domain1 www.domain1; location ~ \.php$ { fastcgi_pass [::]:9056; } } [subdomain.domain1.conf]: server { listen [::]:80; root /home/domain1/public_html/subdomain; index index.php index.html index.htm; server_name subdomain.domain1 www.subdomain.domain1; location ~ \.php$ { fastcgi_pass [::]:9056; } } The error happens when Listen and fastcgi_pass have the same port address in both domains, Thks, Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272864,272867#msg-272867 From nginx-forum at forum.nginx.org Fri Mar 10 09:04:32 2017 From: nginx-forum at forum.nginx.org (javiii) Date: Fri, 10 Mar 2017 04:04:32 -0500 Subject: proxy_cache_use_stale based on IP address Message-ID: <413343029e3987ed294aae426a4cbf2a.NginxMailingListEnglish@forum.nginx.org> Hi! Is it possible to send a stale version of the website based on the IP address of the client? Thank you in advance Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272868,272868#msg-272868 From ranshalit at gmail.com Fri Mar 10 17:01:03 2017 From: ranshalit at gmail.com (Ran Shalit) Date: Fri, 10 Mar 2017 19:01:03 +0200 Subject: upload xml file Message-ID: Hello, I am new with web servers and nginx. I would like to ask if nginx support xml , and what does it mean to upload xml to web server ? 
Does it just keep the xml as file in some directory , or does it do parse the xml file and do some actions ? Thank you, Ran From jeff.dyke at gmail.com Fri Mar 10 22:19:19 2017 From: jeff.dyke at gmail.com (Jeff Dyke) Date: Fri, 10 Mar 2017 17:19:19 -0500 Subject: upload xml file In-Reply-To: References: Message-ID: what do you want it to do? if you're talking nginx without any application backend you could do a lot with some lua locations, or you're going to pass that request to another process, or serve a static (xml) file from the file system. Nginx does support XML just fine, its all a matter of what you want your application to do. On Fri, Mar 10, 2017 at 12:01 PM, Ran Shalit wrote: > Hello, > > I am new with web servers and nginx. > I would like to ask if nginx support xml , and what does it mean to > upload xml to web server ? > Does it just keep the xml as file in some directory , or does it do > parse the xml file and do some actions ? > > Thank you, > Ran > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sat Mar 11 01:00:11 2017 From: francis at daoine.org (Francis Daly) Date: Sat, 11 Mar 2017 01:00:11 +0000 Subject: configuration nginx server block [virtual host] with Ipv6. In-Reply-To: References: Message-ID: <20170311010011.GK15209@daoine.org> On Thu, Mar 09, 2017 at 06:06:05PM -0500, Vanhels wrote: Hi there, > The error happens when Listen and fastcgi_pass have the same port address in > both domains, What's the error? What do you do / what do you see / what do you want to see instead? f -- Francis Daly francis at daoine.org From tecnologiaterabyte at gmail.com Sat Mar 11 01:06:03 2017 From: tecnologiaterabyte at gmail.com (Wilmer Arambula) Date: Fri, 10 Mar 2017 21:06:03 -0400 Subject: configuration nginx server block [virtual host] with Ipv6. 
In-Reply-To: <20170311010011.GK15209@daoine.org> References: <20170311010011.GK15209@daoine.org> Message-ID: The websites' pages do not load, do not open, and nothing is written in the log. Thanks, Wilmer. El 10/3/2017 21:00, "Francis Daly" escribió: On Thu, Mar 09, 2017 at 06:06:05PM -0500, Vanhels wrote: Hi there, > The error happens when Listen and fastcgi_pass have the same port address in > both domains, What's the error? What do you do / what do you see / what do you want to see instead? f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sat Mar 11 01:36:57 2017 From: francis at daoine.org (Francis Daly) Date: Sat, 11 Mar 2017 01:36:57 +0000 Subject: configuration nginx server block [virtual host] with Ipv6. In-Reply-To: References: <20170311010011.GK15209@daoine.org> Message-ID: <20170311013657.GL15209@daoine.org> On Fri, Mar 10, 2017 at 09:06:03PM -0400, Wilmer Arambula wrote: Hi there, > The websites' pages do not load, do not open, and nothing is written in the log, What response do you get when you do something like curl -v -g -H Host:domain1.com 'http://[::1]:80/' on the server itself? "Nothing in the log" usually means that the request is not getting to nginx at all. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sat Mar 11 03:41:47 2017 From: nginx-forum at forum.nginx.org (patweb99) Date: Fri, 10 Mar 2017 22:41:47 -0500 Subject: Random rewrite/proxy_pass timeouts Message-ID: I'm seeing some strange behavior with nginx. I'm using it to proxy requests from one domain to another. What I'm seeing is that at times requests to help.example.com start hanging. In order to fix it I need to either reload or restart the nginx service, which makes me think it's a resource issue, but thought I'd check here first.
Has anyone experienced anything like this before? Here are a few technical details: - nginx version: nginx/1.10.1 - t2.small in AWS - Fronted by a classic ELB I've also attached my nginx.conf and site.conf for reference. I have a few other sites in use that also use proxy_pass and it works just fine. So I'm thinking it may have something to do with the rewrite, but not 100% sure. ngnix.conf include /usr/share/nginx/modules/*; # nginx Configuration File # http://wiki.nginx.org/Configuration # Run as a less privileged user for security reasons. user www-data; # How many worker threads to run; # "auto" sets it to the number of CPU cores available in the system, and # offers the best performance. Don't set it higher than the number of CPU # cores if changing this parameter. # The maximum number of connections for Nginx is calculated by: # max_clients = worker_processes * worker_connections worker_processes 1; # Maximum open file descriptors per process; # should be > worker_connections. worker_rlimit_nofile 8192; events { # When you need > 8000 * cpu_cores connections, you start optimizing your OS, # and this is probably the point at which you hire people who are smarter than # you, as this is *a lot* of requests. worker_connections 8000; } # Default error log file # (this is only used when you don't override error_log on a server{} level) # options are also notice and info error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; http { # Hide nginx version information. server_tokens off; # Define the MIME types for files. 
include /etc/nginx/mime.types; default_type application/octet-stream; # Update charset_types due to updated mime.types charset_types text/xml text/plain text/vnd.wap.wml application/x-javascript application/rss+xml text/css application/javascript application/json; # Format to use in log files log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; # Default log file # (this is only used when you don't override access_log on a server{} level) access_log /var/log/nginx/access.log main; # How long to allow each connection to stay idle; longer values are better # for each individual client, particularly for SSL, but means that worker # connections are tied up longer. (Default: 65) keepalive_timeout 35; # Speed up file transfers by using sendfile() to copy directly # between descriptors rather than using read()/write(). sendfile on; # Tell Nginx not to send out partial frames; this increases throughput # since TCP frames are filled up before being sent out. (adds TCP_CORK) tcp_nopush on; # Compression # Enable Gzip compressed. gzip on; # Compression level (1-9). # 5 is a perfect compromise between size and cpu usage, offering about # 75% reduction for most ascii files (almost identical to level 9). gzip_comp_level 5; # Don't compress anything that's already small and unlikely to shrink much # if at all (the default is 20 bytes, which is bad as that usually leads to # larger files after gzipping). gzip_min_length 256; # Compress data even for clients that are connecting to us via proxies, # identified by the "Via" header (required for CloudFront). gzip_proxied any; # Tell proxies to cache both the gzipped and regular version of a resource # whenever the client's Accept-Encoding capabilities header varies; # Avoids the issue where a non-gzip capable client (which is extremely rare # today) would display gibberish if their proxy gave them the gzipped version. 
gzip_vary on; # Compress all output labeled with one of the following MIME-types. gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rdf+xml application/rss+xml application/schema+json application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-javascript application/x-web-app-manifest+json application/xhtml+xml application/xml font/eot font/opentype image/bmp image/svg+xml image/vnd.microsoft.icon image/x-icon text/cache-manifest text/css text/javascript text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy text/xml; # text/html is always compressed by HttpGzipModule # This should be turned on if you are going to have pre-compressed copies (.gz) of # static files available. If not it should be left off as it will cause extra I/O # for the check. It is best if you enable this in a location{} block for # a specific directory, or on an individual server{} level. # gzip_static on; # For behind a load balancer real_ip_header X-Forwarded-For; set_real_ip_from 0.0.0.0/0; client_max_body_size 100m; # default blank catch-all server { listen 80 default_server; root /var/www/html/default; index index.html; } # Include files in the sites-enabled folder. server{} configuration files should be # placed in the sites-available folder, and then the configuration should be enabled # by creating a symlink to it in the sites-available folder. # See doc/sites-enabled.md for more info. 
include /etc/nginx/sites-enabled/*; } site.conf server { listen 80; server_name help.example.com; access_log /var/log/nginx/access.log access; error_log /var/log/nginx/error.log error; location /assets/ { proxy_pass https://site.help/assets/; proxy_set_header Host site.help; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Proto https; proxy_set_header X-Scheme https; } location /_api/ { proxy_pass https://site.help/_api/; proxy_set_header Host site.help; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Proto https; proxy_set_header X-Scheme https; } location / { rewrite ^/(.*)/$ /example/$1/ permanent; proxy_pass https://site.help; proxy_set_header Host help.site.com; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-Proto https; } # Deny config files location ~* \.(sh|lock|json)$ { deny all; } # Deny any hidden and old version of files location ~ /\. { access_log off; log_not_found off; deny all; } location ~ ~$ { access_log off; log_not_found off; deny all; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272887,272887#msg-272887 From yongtao_you at yahoo.com Sat Mar 11 05:13:38 2017 From: yongtao_you at yahoo.com (Yongtao You) Date: Sat, 11 Mar 2017 05:13:38 +0000 (UTC) Subject: Conflict between form-input-nginx-module and nginx-auth-request-module? In-Reply-To: <1719659735.1981960.1489064303554@mail.yahoo.com> References: <1719659735.1981960.1489064303554.ref@mail.yahoo.com> <1719659735.1981960.1489064303554@mail.yahoo.com> Message-ID: <1583805220.3341540.1489209218867@mail.yahoo.com> To answer my own question, even though I still don't see why it has anything to do with the auth-request module, but the reason requests are timing out is because form-input module called ngx_http_read_client_request_body(), which then set write_event_handler to ngx_http_request_empty_handler to block write events. This seems to block the event from being forwarded to the backend (via proxy_pass). 
I modified the ngx_http_read_client_request_body() implementation to not override the request's write_event_handler, and everything started working. No more timeouts. I don't know enough to understand the ramifications of my change, even though it seems to have fixed my immediate problem. Any insights will be greatly appreciated. Thanks. Yongtao On Thursday, March 9, 2017 8:58 PM, Yongtao You via nginx wrote: Hi, I was able to get both form-input-nginx-module and nginx-auth-request-module to work fine, individually, with nginx-1.9.9. Putting them together, the HTTP POST requests just time out (I never got any response back). Has anyone had any experience using both of them in the same location block? Thanks. Yongtao _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From ranshalit at gmail.com Sat Mar 11 07:07:06 2017 From: ranshalit at gmail.com (Ran Shalit) Date: Sat, 11 Mar 2017 09:07:06 +0200 Subject: upload xml file Message-ID: On Sat, Mar 11, 2017 at 12:19 AM, Jeff Dyke wrote: > what do you want it to do? if you're talking nginx without any application > backend you could do a lot with some lua locations, or you're going to pass > that request to another process, or serve a static (xml) file from the file > system. Hi Jeff, Thank you very much. I have a requirement that the application should "support webserver which includes xml and save xml file to filesystem". Does it mean that it actually should "serve a static (xml) file from the file system" (the second option you mentioned)? If yes, can you give some hints about it, or where to read further about it? Does it mean it just saves and retrieves the file, or does it parse the file into other commands? (I have read somewhere that uploading a file parses the file, so I am a bit confused here.) I am new with nginx, still learning it.
Many thanks, Ran Nginx does support XML just fine, it's all a matter of what you want > your application to do. > > On Fri, Mar 10, 2017 at 12:01 PM, Ran Shalit wrote: >> >> Hello, >> >> I am new with web servers and nginx. >> I would like to ask if nginx support xml , and what does it mean to >> upload xml to web server ? >> Does it just keep the xml as file in some directory , or does it do >> parse the xml file and do some actions ? >> >> Thank you, >> Ran >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From trashcan at ellael.org Sat Mar 11 08:07:54 2017 From: trashcan at ellael.org (Michael Grimm) Date: Sat, 11 Mar 2017 09:07:54 +0100 Subject: proxy_pass and weird behaviour Message-ID: <80B90152-6508-4AF2-B8AE-B5E0FF18BF22@ellael.org> Hi, (This is nginx 1.11.10 and up-to-date FreeBSD STABLE-11) I recently implemented LE certificates for my virtual domains, which will be served at two hosts, accessed by round-robin DNS, aka two IP addresses.
In order to get the acme challenges running, I did implement the following configuration: Host A and Host B: # port 80 server { include include/IPs-80; server_name example.com; location / { # redirect letsencrypt ACME challenge requests to local-at-host-A.lan location /.well-known/acme-challenge/ { proxy_pass http://local-at-host-A.lan; } # all other requests are redirect to https, permanently return 301 https://$server_name$request_uri; } } # port 443 [snip] Server local-at-host-A.lan (LE acme) finally serves the acme challenge directory: server { include include/IPs-80; server_name local-at-host-A.lan; # redirect all letsencrypt ACME challenges to one global directory location /.well-known/acme-challenge/ { root /var/www/acme/; } } Well, that is working, somehow, except: If the LE server addresses Host A, the challenge file is going to be retrieved instantaneously. If the LE server addresses Host B, only every *other* request is being served instantaneously: 1. access: immediately download 2. access: 60 s wait, then download 3. access: immediately download 4. access: 60 s wait, then download etc. Hmm, default proxy_connect_timeout is 60s, I know. But why every other connect? Every feedback on how to solve/debug that issue is highly welcome. Thanks and regards, Michael From igal at lucee.org Sat Mar 11 19:04:27 2017 From: igal at lucee.org (Igal @ Lucee.org) Date: Sat, 11 Mar 2017 11:04:27 -0800 Subject: upload xml file In-Reply-To: References: Message-ID: Ran, You would probably want to use an Application Server behind the Web Server (nginx). Then you would use the nginx to proxy the request to the application server, where you can write code in the language that the application server supports (for example, I use Lucee as an application server, but many people use PHP or JSP or a bunch of other technologies). In the application server you can do whatever you want with the request, e.g. 
read request parameters and build an XML document on the fly, and send it back to nginx which will send it back to the client. nginx is a web server, similar in function to apache httpd or IIS, but much better in my opinion (and in the opinion of most users on this list, I would think), but if you want to serve dynamic content then you should have an application server behind it. HTH, Igal Sapir Lucee Core Developer Lucee.org On 3/10/2017 11:07 PM, Ran Shalit wrote: > On Sat, Mar 11, 2017 at 12:19 AM, Jeff Dyke wrote: >> what do you want it to do? if you're talking nginx without any application >> backend you could do a lot with some lua locations, or you're going to pass >> that request to another process, or serve a static (xml) file from the file >> system. > Hi Jeff, > > Thank you very much. > I have requirement that application should "support webserver which > includes xml and save xml file to filesystem" > Does it mean that it actually should " serve a static (xml) file from > the file" (the second option you mentioned). > If yes - can you give some hints about it or where to read further about it ? > > Does it mean it just save and retrieve file parsing it to other > commands ? (I have read somewhere that uploading a file parse the > file, so I am a bit confused here) > > I am new with nginx, still learning it. > > Many thanks, > Ran > > Nginx does support XML just fine, its all a matter of what you want >> your application to do. >> >> On Fri, Mar 10, 2017 at 12:01 PM, Ran Shalit wrote: >>> Hello, >>> >>> I am new with web servers and nginx. >>> I would like to ask if nginx support xml , and what does it mean to >>> upload xml to web server ? >>> Does it just keep the xml as file in some directory , or does it do >>> parse the xml file and do some actions ? 
>>> >>> Thank you, >>> Ran >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From ranshalit at gmail.com Sun Mar 12 05:08:50 2017 From: ranshalit at gmail.com (Ran Shalit) Date: Sun, 12 Mar 2017 07:08:50 +0200 Subject: upload xml file In-Reply-To: References: Message-ID: On Mar 11, 2017 9:04 PM, "Igal @ Lucee.org" wrote: > > Ran, > > You would probably want to use an Application Server behind the Web Server (nginx). Then you would use the nginx to proxy the request to the application server, where you can write code in the language that the application server supports (for example, I use Lucee as an application server, but many people use PHP or JSP or a bunch of other technologies). > > In the application server you can do whatever you want with the request, e.g. read request parameters and build an XML document on the fly, and send it back to nginx which will send it back to the client. Hi, In case we just want to save xml file(like a binary file) and later read it, Is there any need for application in server or is it only nginx server required for such reuirement? Thank you, Ran > > nginx is a web server, similar in function to apache httpd or IIS, but much better in my opinion (and in the opinion of most users on this list, I would think), but if you want to serve dynamic content then you should have an application server behind it. 
> > HTH, > > Igal Sapir > Lucee Core Developer > Lucee.org > > On 3/10/2017 11:07 PM, Ran Shalit wrote: >> >> On Sat, Mar 11, 2017 at 12:19 AM, Jeff Dyke wrote: >>> >>> what do you want it to do? if you're talking nginx without any application >>> backend you could do a lot with some lua locations, or you're going to pass >>> that request to another process, or serve a static (xml) file from the file >>> system. >> >> Hi Jeff, >> >> Thank you very much. >> I have requirement that application should "support webserver which >> includes xml and save xml file to filesystem" >> Does it mean that it actually should " serve a static (xml) file from >> the file" (the second option you mentioned). >> If yes - can you give some hints about it or where to read further about it ? >> >> Does it mean it just save and retrieve file parsing it to other >> commands ? (I have read somewhere that uploading a file parse the >> file, so I am a bit confused here) >> >> I am new with nginx, still learning it. >> >> Many thanks, >> Ran >> >> Nginx does support XML just fine, its all a matter of what you want >>> >>> your application to do. >>> >>> On Fri, Mar 10, 2017 at 12:01 PM, Ran Shalit wrote: >>>> >>>> Hello, >>>> >>>> I am new with web servers and nginx. >>>> I would like to ask if nginx support xml , and what does it mean to >>>> upload xml to web server ? >>>> Does it just keep the xml as file in some directory , or does it do >>>> parse the xml file and do some actions ? 
>>>> >>>> Thank you, >>>> Ran >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From igal at lucee.org Sun Mar 12 05:26:22 2017 From: igal at lucee.org (Igal @ Lucee.org) Date: Sat, 11 Mar 2017 21:26:22 -0800 Subject: upload xml file In-Reply-To: References: Message-ID: If this is a static file, i.e. a file that you upload to the server and is served as-is, then you do not need anything beyond nginx. nginx does a great job at serving static content. Igal Sapir Lucee Core Developer Lucee.org On 3/11/2017 9:08 PM, Ran Shalit wrote: > > > On Mar 11, 2017 9:04 PM, "Igal @ Lucee.org" > wrote: > > > > Ran, > > > > You would probably want to use an Application Server behind the Web > Server (nginx). Then you would use the nginx to proxy the request to > the application server, where you can write code in the language that > the application server supports (for example, I use Lucee as an > application server, but many people use PHP or JSP or a bunch of other > technologies). > > > > In the application server you can do whatever you want with the > request, e.g. read request parameters and build an XML document on the > fly, and send it back to nginx which will send it back to the client. 
> > Hi, > In case we just want to save xml file(like a binary file) and later > read it, Is there any need for application in server or is it only > nginx server required for such reuirement? > Thank you, > Ran > > > > > nginx is a web server, similar in function to apache httpd or IIS, > but much better in my opinion (and in the opinion of most users on > this list, I would think), but if you want to serve dynamic content > then you should have an application server behind it. > > > > HTH, > > > > Igal Sapir > > Lucee Core Developer > > Lucee.org > > > > On 3/10/2017 11:07 PM, Ran Shalit wrote: > >> > >> On Sat, Mar 11, 2017 at 12:19 AM, Jeff Dyke > wrote: > >>> > >>> what do you want it to do? if you're talking nginx without any > application > >>> backend you could do a lot with some lua locations, or you're > going to pass > >>> that request to another process, or serve a static (xml) file from > the file > >>> system. > >> > >> Hi Jeff, > >> > >> Thank you very much. > >> I have requirement that application should "support webserver which > >> includes xml and save xml file to filesystem" > >> Does it mean that it actually should " serve a static (xml) file from > >> the file" (the second option you mentioned). > >> If yes - can you give some hints about it or where to read further > about it ? > >> > >> Does it mean it just save and retrieve file parsing it to other > >> commands ? (I have read somewhere that uploading a file parse the > >> file, so I am a bit confused here) > >> > >> I am new with nginx, still learning it. > >> > >> Many thanks, > >> Ran > >> > >> Nginx does support XML just fine, its all a matter of what you want > >>> > >>> your application to do. > >>> > >>> On Fri, Mar 10, 2017 at 12:01 PM, Ran Shalit > wrote: > >>>> > >>>> Hello, > >>>> > >>>> I am new with web servers and nginx. > >>>> I would like to ask if nginx support xml , and what does it mean to > >>>> upload xml to web server ? 
> >>>> Does it just keep the xml as file in some directory , or does it do > >>>> parse the xml file and do some actions ? > >>>> > >>>> Thank you, > >>>> Ran > >>>> _______________________________________________ > >>>> nginx mailing list > >>>> nginx at nginx.org > >>>> http://mailman.nginx.org/mailman/listinfo/nginx > >>> > >>> > >>> > >>> _______________________________________________ > >>> nginx mailing list > >>> nginx at nginx.org > >>> http://mailman.nginx.org/mailman/listinfo/nginx > >> > >> _______________________________________________ > >> nginx mailing list > >> nginx at nginx.org > >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From fsantiago at garbage-juice.com Sun Mar 12 15:54:15 2017 From: fsantiago at garbage-juice.com (Fabian A. Santiago) Date: Sun, 12 Mar 2017 15:54:15 +0000 Subject: Nginx serving extra ssl certs Message-ID: Hello nginx world, I hope you can help me track down my issue. First, I'm running: Centos 7.3.1611 Nginx 1.11.10 Openssl 1.0.1e-fips My issue is I run 11 virtual sites, all listening on both ipv4 & 6, same two addresses, so obviously I rely on SNI. One site also listens on tor. When I check the ssl responses using either ssllabs server test or openssl s_client, my sites work fine but also serve an extra 2nd cert meant for the wrong hostname. I'm confused as I see no issue with my config files. I've attached a sample of my config files for one site for your perusal. You can also check this domain for yourself: server1.garbage-juice.com Thanks for your help. -- Thanks. Fabian S. 
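A hint for anyone reproducing Fabian's symptom: with several SNI-based virtual hosts on one address, clients that do not send SNI are answered by the default server for that listen socket and therefore receive its certificate, whichever host they meant. A minimal sketch — the hostnames and certificate paths are hypothetical:

```nginx
# Non-SNI clients (e.g. openssl s_client without -servername) reach the
# default server for the socket and see its certificate.
server {
    listen 443 ssl default_server;
    server_name server1.example.com;
    ssl_certificate     /etc/ssl/server1.crt;   # hypothetical paths
    ssl_certificate_key /etc/ssl/server1.key;
}

server {
    listen 443 ssl;                   # selected only via SNI
    server_name server2.example.com;
    ssl_certificate     /etc/ssl/server2.crt;
    ssl_certificate_key /etc/ssl/server2.key;
}
```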
-------------- next part -------------- A non-text attachment was scrubbed... Name: Documents.7z Type: application/octet-stream Size: 1304 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 870 bytes Desc: not available URL: From r1ch+nginx at teamliquid.net Sun Mar 12 19:58:41 2017 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Sun, 12 Mar 2017 20:58:41 +0100 Subject: Nginx serving extra ssl certs In-Reply-To: References: Message-ID: Your configs look fine, what you are seeing is the certificate that is sent if a client does not support SNI. You can control which certificate is chosen using the default_server parameter on your listen directive. On Sun, Mar 12, 2017 at 4:54 PM, Fabian A. Santiago < fsantiago at garbage-juice.com> wrote: > Hello nginx world, > > I hope you can help me track down my issue. > > First, I'm running: > > Centos 7.3.1611 > Nginx 1.11.10 > Openssl 1.0.1e-fips > > My issue is I run 11 virtual sites, all listening on both ipv4 & 6, same > two addresses, so obviously I rely on SNI. One site also listens on tor. > > When I check the ssl responses using either ssllabs server test or openssl > s_client, my sites work fine but also serve an extra 2nd cert meant for the > wrong hostname. I'm confused as I see no issue with my config files. > > I've attached a sample of my config files for one site for your perusal. > > You can also check this domain for yourself: > > server1.garbage-juice.com > > Thanks for your help. > > > -- > Thanks. > Fabian S. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fsantiago at garbage-juice.com Sun Mar 12 20:27:58 2017 From: fsantiago at garbage-juice.com (Fabian A. 
Santiago) Date: Sun, 12 Mar 2017 20:27:58 +0000 Subject: Nginx serving extra ssl certs In-Reply-To: References: Message-ID: <87045FD9-A69E-433A-AF0B-6BB7E794AE6A@garbage-juice.com> On March 12, 2017 3:58:41 PM EDT, Richard Stanway wrote: >Your configs look fine, what you are seeing is the certificate that is >sent >if a client does not support SNI. You can control which certificate is >chosen using the default_server parameter on your listen directive. > >On Sun, Mar 12, 2017 at 4:54 PM, Fabian A. Santiago < >fsantiago at garbage-juice.com> wrote: > >> Hello nginx world, >> >> I hope you can help me track down my issue. >> >> First, I'm running: >> >> Centos 7.3.1611 >> Nginx 1.11.10 >> Openssl 1.0.1e-fips >> >> My issue is I run 11 virtual sites, all listening on both ipv4 & 6, >same >> two addresses, so obviously I rely on SNI. One site also listens on >tor. >> >> When I check the ssl responses using either ssllabs server test or >openssl >> s_client, my sites work fine but also serve an extra 2nd cert meant >for the >> wrong hostname. I'm confused as I see no issue with my config files. >> >> I've attached a sample of my config files for one site for your >perusal. >> >> You can also check this domain for yourself: >> >> server1.garbage-juice.com >> >> Thanks for your help. >> >> >> -- >> Thanks. >> Fabian S. >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> Oh, that makes sense. Ok, I guess I just never noticed that before. And also thought that default site wouldn't be sent unless it knew of no SNI already. Thanks. That was easy. -- Thanks. Fabian S. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 870 bytes Desc: not available URL: From nginx-forum at forum.nginx.org Mon Mar 13 06:13:10 2017 From: nginx-forum at forum.nginx.org (nginxsantos) Date: Mon, 13 Mar 2017 02:13:10 -0400 Subject: OAuth Access token validation Message-ID: <840acdde6121f8301091ce84736220f3.NginxMailingListEnglish@forum.nginx.org> Hi, Does Nginx provide the support for verifying the access token in the incoming request from an Identity Server? Regards, Santos Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272899,272899#msg-272899 From reallfqq-nginx at yahoo.fr Mon Mar 13 11:01:46 2017 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 13 Mar 2017 12:01:46 +0100 Subject: OAuth Access token validation In-Reply-To: <840acdde6121f8301091ce84736220f3.NginxMailingListEnglish@forum.nginx.org> References: <840acdde6121f8301091ce84736220f3.NginxMailingListEnglish@forum.nginx.org> Message-ID: nginx can authenticate users based on subrequests to an identity server, yes, RTFM: https://nginx.org/en/docs/http/ngx_http_auth_request_module.html If you want to use JSON Web Tokens, only the non-FOSS version will be able to help you: https://nginx.org/en/docs/http/ngx_http_auth_jwt_module.html --- *B. R.* On Mon, Mar 13, 2017 at 7:13 AM, nginxsantos wrote: > Hi, > > Does Nginx provide the support for verifying the access token in the > incoming request from an Identity Server? > > > Regards, Santos > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,272899,272899#msg-272899 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Mon Mar 13 12:44:31 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 13 Mar 2017 15:44:31 +0300 Subject: proxy_pass and weird behaviour In-Reply-To: <80B90152-6508-4AF2-B8AE-B5E0FF18BF22@ellael.org> References: <80B90152-6508-4AF2-B8AE-B5E0FF18BF22@ellael.org> Message-ID: <20170313124431.GB13617@mdounin.ru> Hello! On Sat, Mar 11, 2017 at 09:07:54AM +0100, Michael Grimm wrote: [...] > Well, that is working, somehow, except: If the LE server > addresses Host A, the challenge file is going to be retrieved > instantaneously. If the LE server addresses Host B, only every > *other* request is being served instantaneously: > > 1. access: immediately download > 2. access: 60 s wait, then download > 3. access: immediately download > 4. access: 60 s wait, then download > etc. > > > Hmm, default proxy_connect_timeout is 60s, I know. But why every > other connect? You are using "proxy_pass http://local-at-host-A.lan;" in your configuration. What are the IP addresses it resolves to? The behaviour observed suggests that the name resolves to 2 different addresses, so nginx uses round-robin to balance between these addresses, and only one of these addresses is reacheable. The exact pattern also requires more than 10 seconds between (2) and (4), else (4) will be directed to a properly working address, see http://nginx.org/en/docs/http/ngx_http_upstream_module.html#fail_timeout. Though it is something likely to happen when testing manually. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Mar 13 13:38:03 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 13 Mar 2017 16:38:03 +0300 Subject: Conflict between form-input-nginx-module and nginx-auth-request-module? 
In-Reply-To: <1583805220.3341540.1489209218867@mail.yahoo.com> References: <1719659735.1981960.1489064303554.ref@mail.yahoo.com> <1719659735.1981960.1489064303554@mail.yahoo.com> <1583805220.3341540.1489209218867@mail.yahoo.com> Message-ID: <20170313133803.GD13617@mdounin.ru> Hello! On Sat, Mar 11, 2017 at 05:13:38AM +0000, Yongtao You via nginx wrote: > To answer my own question, even though I still don't see why it has anything to do with the auth-request module, but the reason requests are timing out is because form-input module called ngx_http_read_client_request_body(), which then set write_event_handler to ngx_http_request_empty_handler to block write events. This seems to block the event from being forwarded to the backend (via proxy_pass). I modified the ngx_http_read_client_request_body() implementation to not override the request's write_event_handler, and everything started working. No more timeouts. > I don't know enough to understand the ramification of my change, even though it seems to fixed my immediate problem. Any insights will be greatly appreciated. The ngx_http_read_client_request_body() function must overwrite request handlers, including r->write_event_handler, or an write event while reading a request body will result in unexpected additional processing. Proper solution would be to fix the form-input module to restore r->write_event_handler to ngx_http_core_run_phases() after the request body has been read (and before the module calls ngx_http_core_run_phases() again). -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Mon Mar 13 14:22:09 2017 From: nginx-forum at forum.nginx.org (larsg) Date: Mon, 13 Mar 2017 10:22:09 -0400 Subject: Reverse Proxy with 500k connections In-Reply-To: References: Message-ID: <526e090e93b14086f8f48677f21a3006.NginxMailingListEnglish@forum.nginx.org> Hi Guys, we solved the problem and I wanted to give you feedback about the solution. Finally it was an problem with our linux ip routes. 
After implementing source based policy routing this nginx configuration worked. Thank you for your support! Kind Regards Lars Summary of Solution: split_clients "${remote_addr}${remote_port}AAAA" $source_ip { 10% 192.168.1.130; 10% 192.168.1.131; ... * 192.168.1.139; } server { listen 443 ssl backlog=163840; proxy_bind $source_ip; ... ip rule ls 0: from all lookup local 32754: from 192.168.1.139 lookup es-source-eth9 ... 32756: from 192.168.1.130 lookup es-source-eth0 32766: from all lookup main 32767: from all lookup default ip route list table es-source-eth9 192.168.1.0/24 dev eth9 scope link Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272808,272915#msg-272915 From nginx-forum at forum.nginx.org Mon Mar 13 14:38:12 2017 From: nginx-forum at forum.nginx.org (larsg) Date: Mon, 13 Mar 2017 10:38:12 -0400 Subject: proxy_bind with hostname from /etc/hosts possible? Message-ID: Hi! is it possible to use an hostname from local /etc/hosts as proxy_bind value? In our current Background: We use nginx 1.8.1 as reverse proxy. In order to overcome the "Overcoming Ephemeral Port Exhaustion" problem (64k+ connections), we use proxy_bind to iterate over all loccally available IP addresses and assign them as source IP (see https://www.nginx.com/blog/overcoming-ephemeral-port-exhaustion-nginx-plus/) In order to have an generic nginx configuration for all of our nginx instances, we don't want to hard code server specific IPs in the nginx.conf but use hostnames that are defined in the local /etc/hosts. You can see our current configuration above. Unfortunately nginx cannot resolve the hostname (localip0 etc.). There is an error log "invalid local address "localip0"...). We also tested the usage of upstream directive. Same result. I'm worry that I only can use explicit IP addresses in this situation. Or do you have an alternative solution? /etc/host: 192.168.1.130 localip0 192.168.1.132 localip1 ... 
nginx.conf: split_clients "${remote_addr}${remote_port}AAAA" $source_ip { 10% localip0; 10% localip1; ... } server { listen 443; proxy_bind $source_ip; ... Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272918,272918#msg-272918 From simpot at simpot.com Tue Mar 14 07:20:35 2017 From: simpot at simpot.com (Dmitry Saratsky) Date: Tue, 14 Mar 2017 07:20:35 +0000 Subject: adding LUA module into yum repo? In-Reply-To: References: Message-ID: Hello, Is there any plans to add LUA module into official nginx Yum repository? Thanks a lot! -------------- next part -------------- An HTML attachment was scrubbed... URL: From yongtao_you at yahoo.com Tue Mar 14 07:27:43 2017 From: yongtao_you at yahoo.com (Yongtao You) Date: Tue, 14 Mar 2017 07:27:43 +0000 (UTC) Subject: Conflict between form-input-nginx-module and nginx-auth-request-module? In-Reply-To: <20170313133803.GD13617@mdounin.ru> References: <1719659735.1981960.1489064303554.ref@mail.yahoo.com> <1719659735.1981960.1489064303554@mail.yahoo.com> <1583805220.3341540.1489209218867@mail.yahoo.com> <20170313133803.GD13617@mdounin.ru> Message-ID: <1718883862.4982566.1489476463632@mail.yahoo.com> Thanks! On Monday, March 13, 2017 9:38 PM, Maxim Dounin wrote: Hello! On Sat, Mar 11, 2017 at 05:13:38AM +0000, Yongtao You via nginx wrote: > To answer my own question, even though I still don't see why it has anything to do with the auth-request module, but the reason requests are timing out is because form-input module called ngx_http_read_client_request_body(), which then set write_event_handler to ngx_http_request_empty_handler to block write events. This seems to block the event from being forwarded to the backend (via proxy_pass). I modified the ngx_http_read_client_request_body() implementation to not override the request's write_event_handler, and everything started working. No more timeouts. > I don't know enough to understand the ramification of my change, even though it seems to fixed my immediate problem. 
Any insights will be greatly appreciated. The ngx_http_read_client_request_body() function must overwrite request handlers, including r->write_event_handler, or an write event while reading a request body will result in unexpected additional processing. Proper solution would be to fix the form-input module to restore r->write_event_handler to ngx_http_core_run_phases() after the request body has been read (and before the module calls ngx_http_core_run_phases() again). -- Maxim Dounin http://nginx.org/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From trashcan at ellael.org Tue Mar 14 14:16:17 2017 From: trashcan at ellael.org (Michael Grimm) Date: Tue, 14 Mar 2017 15:16:17 +0100 Subject: proxy_pass and weird behaviour In-Reply-To: <20170313124431.GB13617@mdounin.ru> References: <80B90152-6508-4AF2-B8AE-B5E0FF18BF22@ellael.org> <20170313124431.GB13617@mdounin.ru> Message-ID: Maxim Dounin wrote: > On Sat, Mar 11, 2017 at 09:07:54AM +0100, Michael Grimm wrote: > [...] > >> Well, that is working, somehow, except: If the LE server >> addresses Host A, the challenge file is going to be retrieved >> instantaneously. If the LE server addresses Host B, only every >> *other* request is being served instantaneously: >> >> 1. access: immediately download >> 2. access: 60 s wait, then download >> 3. access: immediately download >> 4. access: 60 s wait, then download >> etc. >> >> >> Hmm, default proxy_connect_timeout is 60s, I know. But why every >> other connect? > > You are using "proxy_pass http://local-at-host-A.lan;" in your > configuration. What are the IP addresses it resolves to? > > The behaviour observed suggests that the name resolves to 2 > different addresses, so nginx uses round-robin to balance between > these addresses, and only one of these addresses is reacheable. Bingo! 
I had had two issues in that regard: My local resolver returned one IPv4 and on IPv6 address for local-at-host-A.lan, and in my server block I had had an include statement with listen statements for IPv4 and IPv6 addresses. (Those were left-overs I didn't bear in mind when removing IPv6 functionality for that given nginx server.) Now, everything is working as expected. Thank you very much for pointing me to the right direction! With kind regards, Michael From nginx-forum at forum.nginx.org Tue Mar 14 22:01:45 2017 From: nginx-forum at forum.nginx.org (jaigupta) Date: Tue, 14 Mar 2017 18:01:45 -0400 Subject: upstream sent unexpected FastCGI record: 3 while reading response header from upstream Message-ID: <5fdf6b644abe9b708902a425935ced2d.NginxMailingListEnglish@forum.nginx.org> I am getting random "upstream sent unexpected FastCGI record: 3 while reading response header from upstream" while using Nginx with PHP7. So far I was waiting for bug https://bugs.php.net/bug.php?id=67583 to be fixed by PHP which they now have but I still get this error. Everything works fine for few days and then suddenly I start getting this error. How can I debug this in detail? Is there a way I could see or log response from PHP (FastCGI) when this error occurs? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272936,272936#msg-272936 From ranshalit at gmail.com Wed Mar 15 07:16:49 2017 From: ranshalit at gmail.com (Ran Shalit) Date: Wed, 15 Mar 2017 09:16:49 +0200 Subject: nginx with ssl as reverse proxy Message-ID: Hello, I have followed the article which explains how to configure nginx with ssl as reverse proxy for Jenkins: https://www.digitalocean.com/community/tutorials/how-to-configure-nginx-with-ssl-as-a-reverse-proxy-for-jenkins Yet, I don't understand one thing, The ssl key is configured only for nginx: client ------- nginx proxy ----Jenkins server Doesn't Jenkins server need to be provided with keys too ? 
Thank you, Ran From nginx-forum at forum.nginx.org Wed Mar 15 10:02:08 2017 From: nginx-forum at forum.nginx.org (GuiltyNL) Date: Wed, 15 Mar 2017 06:02:08 -0400 Subject: HTTP/2 seems slow Message-ID: <0aadfe8996034f1772021724aab0a2e8.NginxMailingListEnglish@forum.nginx.org> Hi guys, I'm working on an HTTP/2 website solution and when testing we can see HTTP/2 works. The thing is that everything loads at the same time, but very slowly, while the bandwidth cannot be the problem. In the end HTTP/2 is a bit slower than HTTP/1.1. For example: When I use HTTP/1.1 I have 5 files loading after each other. The time to first byte needed per file is between 13 and 20 ms (plus some loading time) When I use HTTP/2 The same 5 files load at the same time, but the time to first byte needed per file is between 40 and 60 ms (plus some loading time). Where do I need to look? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272944,272944#msg-272944 From hemelaar at desikkel.nl Wed Mar 15 10:05:39 2017 From: hemelaar at desikkel.nl (Jean-Paul Hemelaar) Date: Wed, 15 Mar 2017 11:05:39 +0100 Subject: 200ms delay when serving stale content and proxy_cache_background_update enabled Message-ID: Hi, I noticed a delay of approx. 200ms when the proxy_cache_background_update is used and Nginx sends stale content to the client. Current setup: - Apache webserver as backend sending a slow response delay.php that simply waits for 1 second: - Nginx in front to cache the response, and send stale content if the cache needs to be refreshed.
- wget sending a request from another machine Nginx config-block: location /delay.php { proxy_pass http://backend; proxy_next_upstream error timeout invalid_header; proxy_redirect http://$host:8000/ http://$host/; proxy_buffering on; proxy_connect_timeout 1; proxy_read_timeout 30; proxy_cache_background_update on; proxy_http_version 1.1; proxy_set_header Connection ""; proxy_cache STATIC; proxy_cache_key "$scheme$host$request_uri"; proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Accept-Encoding ""; # Just to test if this caused the issue, but it doesn't change tcp_nodelay on; } Wget request: time wget --server-response --output-document=/dev/null " http://www.example.com/delay.php?teststales=true" Snippet of wget output: X-Cached: STALE Output of time command: real 0m0.253s Wget request: time wget --server-response --output-document=/dev/null " http://www.example.com/delay.php?teststales=true" Snippet of wget output: X-Cached: UPDATING Output of time command: real 0m0.022s So a cache HIT (not shown) or an UPDATING are fast, sending a STALE response takes some time. Tcpdump showed that all HTML content and headers are send immediately after the request has been received, but the last package will be delayed; that's why I tested the tcp_nodelay option in the config. I'm running version 1.11-10 with the patch provided by Maxim: http://hg.nginx.org/nginx/rev/8b7fd958c59f Any idea's on this? Thanks, Jean-Paul -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Wed Mar 15 10:08:30 2017 From: nginx-forum at forum.nginx.org (GuiltyNL) Date: Wed, 15 Mar 2017 06:08:30 -0400 Subject: HTTP/2 seems slow In-Reply-To: <0aadfe8996034f1772021724aab0a2e8.NginxMailingListEnglish@forum.nginx.org> References: <0aadfe8996034f1772021724aab0a2e8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3d0f660e0045bca0879872aec5646740.NginxMailingListEnglish@forum.nginx.org> BTW I did read this: https://www.section.io/blog/Why-do-I-see-long-TTFB-with-HTTP2-in-WebPageTest/ But I would expect that the page itself would load (a bit) faster, and that is not the case: HTTP/1.1 Document Complete 1,777 sec HTTP/2 Document Complete 2,128 sec Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272944,272946#msg-272946 From mdounin at mdounin.ru Wed Mar 15 11:13:41 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 15 Mar 2017 14:13:41 +0300 Subject: upstream sent unexpected FastCGI record: 3 while reading response header from upstream In-Reply-To: <5fdf6b644abe9b708902a425935ced2d.NginxMailingListEnglish@forum.nginx.org> References: <5fdf6b644abe9b708902a425935ced2d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170315111341.GF13617@mdounin.ru> Hello! On Tue, Mar 14, 2017 at 06:01:45PM -0400, jaigupta wrote: > I am getting random "upstream sent unexpected FastCGI record: 3 while > reading response header from upstream" while using Nginx with PHP7. So far I > was waiting for bug https://bugs.php.net/bug.php?id=67583 to be fixed by PHP > which they now have but I still get this error. > > Everything works fine for few days and then suddenly I start getting this > error. How can I debug this in detail? Is there a way I could see or log > response from PHP (FastCGI) when this error occurs? Try debug log, see http://nginx.org/en/docs/debugging_log.html for details. 
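For reference, the debug log Maxim points to is enabled with a single directive, but it only produces output when nginx was built with debugging support (check `nginx -V` for `--with-debug`). A sketch, with a hypothetical log path and client address:

```nginx
error_log /var/log/nginx/error.log debug;   # raise the error_log level to debug

# Optionally restrict debug output to selected clients only:
events {
    debug_connection 192.0.2.1;             # hypothetical client address
}
```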
-- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Mar 15 11:21:16 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 15 Mar 2017 14:21:16 +0300 Subject: nginx with ssl as reverse proxy In-Reply-To: References: Message-ID: <20170315112116.GH13617@mdounin.ru> Hello! On Wed, Mar 15, 2017 at 09:16:49AM +0200, Ran Shalit wrote: > Hello, > > I have followed the article which explains how to configure nginx with > ssl as reverse proxy for Jenkins: > https://www.digitalocean.com/community/tutorials/how-to-configure-nginx-with-ssl-as-a-reverse-proxy-for-jenkins > > Yet, I don't understand one thing, > The ssl key is configured only for nginx: > > client ------- nginx proxy ----Jenkins server > > > Doesn't Jenkins server need to be provided with keys too ? In the article in question nginx speaks to Jenkins via http (not https), so there is no need to configure SSL certificates/keys on Jenkins. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Wed Mar 15 11:22:04 2017 From: nginx-forum at forum.nginx.org (jaigupta) Date: Wed, 15 Mar 2017 07:22:04 -0400 Subject: upstream sent unexpected FastCGI record: 3 while reading response header from upstream In-Reply-To: <20170315111341.GF13617@mdounin.ru> References: <20170315111341.GF13617@mdounin.ru> Message-ID: Thanks Maxim. Would this also log messages between Nginx and FastCGI when error occurs? 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272936,272953#msg-272953 From minoru.nishikubo at lyz.jp Wed Mar 15 11:47:15 2017 From: minoru.nishikubo at lyz.jp (Nishikubo Minoru) Date: Wed, 15 Mar 2017 20:47:15 +0900 Subject: nginx limit_req_module with upstream limit_rate Message-ID: Hello, We want to limit outgoing(upstream) rate with the fixed string key among various virtual hosts as follows: limit_req_zone fixedstring zone=upstream:1m rate=5000r/s; server { server_name vhosta; logation / { limit_req zone=upstream burst=25; proxy_pass http://some_upstream; } } server { server_name vhostb; logation / { limit_req zone=vhostb nodelay; limit_req zone=upstream burst=25; proxy_pass http://some_upstream; } } But on our test, the nginx server send to upstream server 6738 requests in a second. (The vhosta sent 3677 requests, and the vhostb sent 3061 requests) It seems that each virtual hosts limits 5000r/s and entire nginx will 10000r/s. Anyway, we will set limit_req_log_level to info level. Does anyone know limit_req detailed log information? Our nginx version is nginx version: nginx/1.10.1 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Mar 15 12:50:46 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 15 Mar 2017 15:50:46 +0300 Subject: nginx limit_req_module with upstream limit_rate In-Reply-To: References: Message-ID: <20170315125046.GI13617@mdounin.ru> Hello! On Wed, Mar 15, 2017 at 08:47:15PM +0900, Nishikubo Minoru wrote: > We want to limit outgoing(upstream) rate with the fixed string key among > various virtual hosts as follows: > > limit_req_zone fixedstring zone=upstream:1m rate=5000r/s; > > server { > server_name vhosta; > logation / { If this is an exact configuration you've tried to test with, you've probably tested something very different, as there is no "logation" directive in nginx. [...] 
> But on our test, the nginx server send to upstream server 6738 requests in > a second. > (The vhosta sent 3677 requests, and the vhostb sent 3061 requests) Note that testing exact number of requests in a particular second doesn't really make sense as small time difference in time will introduce large errors. Try measuring the average rate of requests for a larger period of time. > It seems that each virtual hosts limits 5000r/s and entire nginx will > 10000r/s. No, this is not how it works. > Anyway, we will set limit_req_log_level to info level. > Does anyone know limit_req detailed log information? Detailed information can be found in debug log, see http://nginx.org/en/docs/debugging_log.html. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Wed Mar 15 13:16:19 2017 From: nginx-forum at forum.nginx.org (jaigupta) Date: Wed, 15 Mar 2017 09:16:19 -0400 Subject: upstream sent unexpected FastCGI record: 3 while reading response header from upstream In-Reply-To: <5fdf6b644abe9b708902a425935ced2d.NginxMailingListEnglish@forum.nginx.org> References: <5fdf6b644abe9b708902a425935ced2d.NginxMailingListEnglish@forum.nginx.org> Message-ID: I increased log level to debug and below is what I received. [error] 7901#7901: *47646 upstream sent unexpected FastCGI record: 3 while reading response header from upstream, client: xx.xx.xx.xx, server: www.xxxxx.com, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/var/run/php-fpm/xxx.fpm.sock:", host: "www.xxxxx.com" Is there a way I could troubleshoot this or get more information? 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272936,272962#msg-272962 From minoru.nishikubo at lyz.jp Wed Mar 15 14:19:01 2017 From: minoru.nishikubo at lyz.jp (Nishikubo Minoru) Date: Wed, 15 Mar 2017 23:19:01 +0900 Subject: nginx limit_req_module with upstream limit_rate In-Reply-To: <20170315125046.GI13617@mdounin.ru> References: <20170315125046.GI13617@mdounin.ru> Message-ID: > If this is an exact configuration you've tried to test with, > you've probably tested something very different, as there is no > "logation" directive in nginx. Oh, location is correct. http { limit_req_zone fixedstring zone=upstream:1m rate=5000r/s; } server { server_name vhosta; location / { limit_req zone=upstream burst=25; proxy_pass http://some_upstream; } } server { server_name vhostb; location / { limit_req zone=vhostb nodelay; limit_req zone=upstream burst=25; proxy_pass http://some_upstream; } } > > It seems that each virtual hosts limits 5000r/s and entire nginx will > > 10000r/s. > > No, this is not how it works. Thanks for explict answer. > Detailed information can be found in debug log, see > http://nginx.org/en/docs/debugging_log.html. We will try, thanks again. On Wed, Mar 15, 2017 at 9:50 PM, Maxim Dounin wrote: > Hello! > > On Wed, Mar 15, 2017 at 08:47:15PM +0900, Nishikubo Minoru wrote: > > > We want to limit outgoing(upstream) rate with the fixed string key among > > various virtual hosts as follows: > > > > limit_req_zone fixedstring zone=upstream:1m rate=5000r/s; > > > > server { > > server_name vhosta; > > logation / { > > If this is an exact configuration you've tried to test with, > you've probably tested something very different, as there is no > "logation" directive in nginx. > > [...] > > > But on our test, the nginx server send to upstream server 6738 requests > in > > a second. 
> > (The vhosta sent 3677 requests, and the vhostb sent 3061 requests) > > Note that testing exact number of requests in a particular second > doesn't really make sense as small time difference in time will > introduce large errors. Try measuring the average rate of > requests for a larger period of time. > > > It seems that each virtual hosts limits 5000r/s and entire nginx will > > 10000r/s. > > No, this is not how it works. > > > Anyway, we will set limit_req_log_level to info level. > > Does anyone know limit_req detailed log information? > > Detailed information can be found in debug log, see > http://nginx.org/en/docs/debugging_log.html. > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Mar 15 14:49:42 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 15 Mar 2017 17:49:42 +0300 Subject: upstream sent unexpected FastCGI record: 3 while reading response header from upstream In-Reply-To: References: <5fdf6b644abe9b708902a425935ced2d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170315144942.GJ13617@mdounin.ru> Hello! On Wed, Mar 15, 2017 at 09:16:19AM -0400, jaigupta wrote: > I increased log level to debug and below is what I received. > > [error] 7901#7901: *47646 upstream sent unexpected FastCGI record: 3 while > reading response header from upstream, client: xx.xx.xx.xx, server: > www.xxxxx.com, request: "GET / HTTP/1.1", upstream: > "fastcgi://unix:/var/run/php-fpm/xxx.fpm.sock:", host: "www.xxxxx.com" > > Is there a way I could troubleshoot this or get more information? Please read the link provided: http://nginx.org/en/docs/debugging_log.html In particular, make sure that nginx you are using is built with debugging support enabled. 
If the above is the only message you see with the logging level set to debug, the answer is probably "no". -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Wed Mar 15 15:07:40 2017 From: nginx-forum at forum.nginx.org (jaigupta) Date: Wed, 15 Mar 2017 11:07:40 -0400 Subject: upstream sent unexpected FastCGI record: 3 while reading response header from upstream In-Reply-To: <20170315144942.GJ13617@mdounin.ru> References: <20170315144942.GJ13617@mdounin.ru> Message-ID: Thank you. Nginx debug enabled for real. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272936,272966#msg-272966 From minoru.nishikubo at lyz.jp Wed Mar 15 16:18:12 2017 From: minoru.nishikubo at lyz.jp (Nishikubo Minoru) Date: Thu, 16 Mar 2017 01:18:12 +0900 Subject: nginx limit_req_module with upstream limit_rate In-Reply-To: References: <20170315125046.GI13617@mdounin.ru> Message-ID: I noticed our nginx configuration limits 5000r/s, which means the test program must generate requests less than 0.2 msec apart; the tester had misconfigured the test conditions. Thanks for the great hint :-) On Wed, Mar 15, 2017 at 11:19 PM, Nishikubo Minoru wrote: > > If this is an exact configuration you've tried to test with, > > you've probably tested something very different, as there is no > > "logation" directive in nginx. > > Oh, location is correct. > > http { > limit_req_zone fixedstring zone=upstream:1m rate=5000r/s; > } > > server { > server_name vhosta; > location / { > limit_req zone=upstream burst=25; > proxy_pass http://some_upstream; > } > } > server { > server_name vhostb; > location / { > limit_req zone=vhostb nodelay; > limit_req zone=upstream burst=25; > proxy_pass http://some_upstream; > } > } > > > > It seems that each virtual hosts limits 5000r/s and entire nginx will > > > 10000r/s. > > > > No, this is not how it works. > > Thanks for the explicit answer. > > > Detailed information can be found in debug log, see > > http://nginx.org/en/docs/debugging_log.html.
> > We will try, thanks again. > > > On Wed, Mar 15, 2017 at 9:50 PM, Maxim Dounin wrote: > >> Hello! >> >> On Wed, Mar 15, 2017 at 08:47:15PM +0900, Nishikubo Minoru wrote: >> >> > We want to limit outgoing(upstream) rate with the fixed string key among >> > various virtual hosts as follows: >> > >> > limit_req_zone fixedstring zone=upstream:1m rate=5000r/s; >> > >> > server { >> > server_name vhosta; >> > logation / { >> >> If this is an exact configuration you've tried to test with, >> you've probably tested something very different, as there is no >> "logation" directive in nginx. >> >> [...] >> >> > But on our test, the nginx server send to upstream server 6738 requests >> in >> > a second. >> > (The vhosta sent 3677 requests, and the vhostb sent 3061 requests) >> >> Note that testing exact number of requests in a particular second >> doesn't really make sense as small time difference in time will >> introduce large errors. Try measuring the average rate of >> requests for a larger period of time. >> >> > It seems that each virtual hosts limits 5000r/s and entire nginx will >> > 10000r/s. >> >> No, this is not how it works. >> >> > Anyway, we will set limit_req_log_level to info level. >> > Does anyone know limit_req detailed log information? >> >> Detailed information can be found in debug log, see >> http://nginx.org/en/docs/debugging_log.html. >> >> -- >> Maxim Dounin >> http://nginx.org/ >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ente.trompete at protonmail.com Wed Mar 15 17:00:07 2017 From: ente.trompete at protonmail.com (SW@EU) Date: Wed, 15 Mar 2017 13:00:07 -0400 Subject: Valid characters in nginx configuration Message-ID: Hi, there can I find the description of the nginx configuration file syntax e.g. in a BNF like notation. 
There is defined which characters are allowed in "name" e.g. of an upstream definition? Only ASCII or UTF8, only alpha or alphanumeric and if the last, must it start with alpha. Can I use special characters like "@"? IMO is this the basic information which should be on top of the documentation but either is does not exists or is very good stashed ;-) I'm not the first which ask this in the web: http://stackoverflow.com/questions/36485834/valid-characters-in-nginx-upstream-name, but there is no answer and it is from 10 years ago :-( Can anyone help me please? Maybe I use wrong search terms. TIA, SW Sent with [ProtonMail](https://protonmail.com) Secure Email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nohojason at gmail.com Wed Mar 15 23:45:15 2017 From: nohojason at gmail.com (Jason In North Hollywood) Date: Thu, 16 Mar 2017 07:45:15 +0800 Subject: Can't create forum account Message-ID: Hi everyone, I'm trying to create an account at forum.nginx.org and it keeps saying denied due to suspected spammer. I've tried from many different ISPs in a few countries (USA included), different email addresses (different domains) and nothing is working. I've tried having the captcha read to me as well. I've never had a problem signing up for an account like this.... Any ideas? Thanks J -------------- next part -------------- An HTML attachment was scrubbed... URL: From nohojason at gmail.com Thu Mar 16 02:39:19 2017 From: nohojason at gmail.com (Jason In North Hollywood) Date: Thu, 16 Mar 2017 10:39:19 +0800 Subject: Simple reverse proxy - 520 bad gateway Message-ID: Trying to do a simple proxy from sub.domain.com/link1 to another server on the LAN - 10.1.1.1:8080/someotherlink1. This is what my server context looks like: (I modified the default nginx.conf) server { listen 80 default_server; listen [::]:80 default_server; server_name sub.domain.com; root /; # Load configuration files for the default server block. 
include /etc/nginx/default.d/*.conf; location /link1 { proxy_pass http://10.1.1.1:8080/link2; } } but visiting a webpage just loads the nginx 502 bad gateway page. Error in the log is: 2017/03/15 22:04:27 [crit] 8647#0: *11 connect() to 10.1.1.1:8080 failed (13: Permission denied) while connecting to upstream, client: 112.xxx.xxx.xxx, server: sub.domain.com, request: "GET /link1/ HTTP/1.1", upstream: "http://10.1.1.1.1:8080/link2/", host: "sub.domain.com" What's a bit strange looking is the GET /link1/ - as this should not be the link in the final upstream URL - it should not be trying to get this link. What am I doing wrong? Thanks, Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Mar 16 03:48:11 2017 From: nginx-forum at forum.nginx.org (shiz) Date: Wed, 15 Mar 2017 23:48:11 -0400 Subject: server listen directive for IPV4 and IPV6 Message-ID: <852f717573fe4bacd0db2ab9d9e3c53f.NginxMailingListEnglish@forum.nginx.org> There is a lot of confusion in the answers I found about it. When I installed nginx first, it was the debian jessie version 1.6.2 and the configuration to listen to both ipv4 and ipv6 was #server { # listen 80; # listen [::]:80; # # server_name example.com; # # root /var/www/example.com; # index index.html; # # location / { # try_files $uri $uri/ =404; # } #} Now I use nginx 1.11.10 and the example configuration file only has one line: listen 80; Should I update my configuration? I might be wrong but I did not see ipv6 requests for a long while. Last time I changed the listen directive following some recommendation found on the web, I ended up with servers not listening to anything!
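For reference, a minimal dual-stack server block looks like the commented-out jessie example above: one listen directive per address family in the same server block. This is a sketch only; the server name and paths are illustrative, not taken from the poster's configuration.

```nginx
# Sketch: accept connections over both IPv4 and IPv6 in one virtual server.
# Without the [::]:80 line, this server answers IPv4 connections only.
server {
    listen 80;         # IPv4, all local addresses
    listen [::]:80;    # IPv6, all local addresses

    server_name example.com;
    root /var/www/example.com;

    location / {
        try_files $uri $uri/ =404;
    }
}
```

With such a configuration loaded, "netstat -nlt" (or "ss -nlt") should show listening sockets on both 0.0.0.0:80 and :::80.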
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272973,272973#msg-272973 From mdounin at mdounin.ru Thu Mar 16 13:06:07 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 16 Mar 2017 16:06:07 +0300 Subject: Simple reverse proxy - 520 bad gateway In-Reply-To: References: Message-ID: <20170316130607.GO13617@mdounin.ru> Hello! On Thu, Mar 16, 2017 at 10:39:19AM +0800, Jason In North Hollywood wrote: [...] > Error in the log is: > > 2017/03/15 22:04:27 [crit] 8647#0: *11 connect() to 10.1.1.1:8080 failed > (13: Permission denied) while connecting to upstream, client: > 112.xxx.xxx.xxx, server: sub.domain.com, request: "GET /link1/ HTTP/1.1", > upstream: "http://10.1.1.1.1:8080/link2/", host: "sub.domain.com" > > whats a bit strange looking is the GET /link1/ - as this this should not be > the link in the final upstream URL - it should not be trying to get this > link. The "request: ..." string in the error message is the original request as got from the client. It is to be used to identify the original request which caused the error. The upstream server and corresponding URI can be found in the "upstream: ..." string in the error message. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Mar 16 13:11:37 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 16 Mar 2017 16:11:37 +0300 Subject: server listen directive for IPV4 and IPV6 In-Reply-To: <852f717573fe4bacd0db2ab9d9e3c53f.NginxMailingListEnglish@forum.nginx.org> References: <852f717573fe4bacd0db2ab9d9e3c53f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170316131137.GP13617@mdounin.ru> Hello! On Wed, Mar 15, 2017 at 11:48:11PM -0400, shiz wrote: > There is a lot of confusion in the answers I fount about it. 
> > When I installed nginx first, it was the debian jessie version 1.6.2 and the > configuration to listen to both ipv4 and ipv6 was > > #server { > # listen 80; > # listen [::]:80; > # > # server_name example.com; > # > # root /var/www/example.com; > # index index.html; > # > # location / { > # try_files $uri $uri/ =404; > # } > #} > > Now I use nginx 1.11.10 and the example configuration file only has one > line: > listen 80; > > > Should I update my configuration? I might be wrong but I did not see ipv6 > requests for a long while. If you want nginx to listen on both IPv4 and IPv6, you have to use both listen 80; and listen [::]:80; in your configuration. With nginx running, you can use "netstat -nlt" or "ss -nlt" to find out which listening sockets are in fact open on your system. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Thu Mar 16 14:19:09 2017 From: nginx-forum at forum.nginx.org (shiz) Date: Thu, 16 Mar 2017 10:19:09 -0400 Subject: server listen directive for IPV4 and IPV6 In-Reply-To: <20170316131137.GP13617@mdounin.ru> References: <20170316131137.GP13617@mdounin.ru> Message-ID: <09802cd982dab513ae3788bbbd004304.NginxMailingListEnglish@forum.nginx.org> Excellent. Very grateful for the clarification! Maxim Dounin Wrote: ------------------------------------------------------- > > If you want nginx to listen on both IPv4 and IPv6, you have to use > both > > listen 80; > > and > > listen [::]:80; > > in your configuration. > > With nginx running, you can use "netstat -nlt" or "ss -nlt" to > find out which listening sockets are in fact open on your system. 
> > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272973,272980#msg-272980 From nginx-forum at forum.nginx.org Thu Mar 16 22:37:49 2017 From: nginx-forum at forum.nginx.org (lchennup) Date: Thu, 16 Mar 2017 18:37:49 -0400 Subject: Can we replace HAProxy with NGINX Message-ID: <7f905789c9b4c2dab8e6bbdea17bbff5.NginxMailingListEnglish@forum.nginx.org> Hi, I have the following HAProxy configuration: global log 127.0.0.1 local1 chroot /var/lib/haproxy pidfile /var/run/haproxy.pid --- --- tune.comp.maxlevel 5 defaults mode http log global option httplog ---------- -------- -------- maxconn 3000 # enable compression (haproxy v1.5-dev13 and above required) compression algo gzip compression type text/html application/javascript text/css application/x-javascript text/javascript application/json frontend http-in bind *:443 ssl crt /etc/ssl/example.com mode http http-request redirect scheme https if { hdr(Host) -i example.com } !{ ssl_fc } acl cluster1_dead nbsrv(service_servers) lt 1 http-request deny if cluster1_dead acl is_service path_beg -i /Module/service use_backend service_servers if is_service default_backend web_servers backend web_servers balance roundrobin option forwardfor cookie SERVERID insert secure httponly maxidle 60m maxlife 180m indirect server exampl.com_id example.com:8080 cookie example.com_name check port 8080 inter 1000 backend service_servers balance roundrobin option forwardfor cookie SERVERID insert secure httponly maxidle 60m maxlife 180m indirect reqrep ^([^\ :]*)\ /Module/service/(.*) \1\ /Module1/\2 server exampl.com_id example.com:8080 cookie example.com_name check port 8080 inter 1000 listen stats :1936 mode http stats enable stats hide-version stats realm Haproxy\ Statistics stats uri / stats auth haproxy:password Can the above is possible to do with 
NGINX?. Thanks, Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272994,272994#msg-272994 From peter.toth198 at gmail.com Thu Mar 16 22:46:46 2017 From: peter.toth198 at gmail.com (Peter Toth) Date: Fri, 17 Mar 2017 11:46:46 +1300 Subject: Illumos/SmartOS/Solaris issue when using eventport and keepalive at the same time Message-ID: Hi all, There is a lock-up issue on Solaris derived operating systems when using "keepalive" and "use eventport" configuration directives at the same time. The issue has been described previously back in 2015 and a patch was proposed here: https://forum.nginx.org/read.php?2,259798,259798#msg-259798 I have tested the patch and it is working. Could anyone comment on whether the patch looks acceptable in its current form? Any objections to applying this fix in the next release? Thanks P -------------- next part -------------- An HTML attachment was scrubbed... URL: From nohojason at gmail.com Fri Mar 17 01:34:17 2017 From: nohojason at gmail.com (Jason In North Hollywood) Date: Fri, 17 Mar 2017 09:34:17 +0800 Subject: Simple reverse proxy - 520 bad gateway In-Reply-To: <20170316130607.GO13617@mdounin.ru> References: <20170316130607.GO13617@mdounin.ru> Message-ID: Hi Maxim - Thanks! I found my error - even though I had SELinux on permissive, it was still blocking. I though permissive allowed, but with logging. BR On Thu, Mar 16, 2017 at 9:06 PM, Maxim Dounin wrote: > Hello! > > On Thu, Mar 16, 2017 at 10:39:19AM +0800, Jason In North Hollywood wrote: > > [...] 
> > > Error in the log is: > > > > 2017/03/15 22:04:27 [crit] 8647#0: *11 connect() to 10.1.1.1:8080 failed > > (13: Permission denied) while connecting to upstream, client: > > 112.xxx.xxx.xxx, server: sub.domain.com, request: "GET /link1/ > HTTP/1.1", > > upstream: "http://10.1.1.1.1:8080/link2/", host: "sub.domain.com" > > > > whats a bit strange looking is the GET /link1/ - as this this should not > be > > the link in the final upstream URL - it should not be trying to get this > > link. > > The "request: ..." string in the error message is the original > request as got from the client. It is to be used to identify the > original request which caused the error. > > The upstream server and corresponding URI can be found in the > "upstream: ..." string in the error message. > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nohojason at gmail.com Fri Mar 17 09:02:54 2017 From: nohojason at gmail.com (Jason In North Hollywood) Date: Fri, 17 Mar 2017 17:02:54 +0800 Subject: Forum signup process broken - who admins the forum? Message-ID: Hi, I absolutely can not sign up at the forum.nginx.org site. I've tried from many places, my work (a major company) - HK and USA, but no luck. Says I'm a spammer every time. Never had this problem before. Is the forum signup process broken? Can you pass this along to them? Thanks -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kapekto1 at gmail.com Fri Mar 17 11:12:40 2017 From: kapekto1 at gmail.com (Tomasz Kapek) Date: Fri, 17 Mar 2017 11:12:40 +0000 Subject: Change target host in proxy_pass Message-ID: Hello, I have NGINX acting as reverse proxy and I would like to achieve something like this: When I get a request like this GET http://app1.mydomain.aa.com/aaa/bbb it should be converted to: GET http://app1.mydomain.bb.com/aaa/bbb so such directive will do the job: proxy_pass http://app1.mydomain.bb.com; problem is that I want to convert host part automatically (regex) basing on incoming requests to NGINX - app1.mydomain are not fixed they are changing very often. Is it possible? Can anyone get a clue how proxy_pass statement should look like? -------------- next part -------------- An HTML attachment was scrubbed... URL: From lucas at lucasrolff.com Fri Mar 17 11:47:21 2017 From: lucas at lucasrolff.com (Lucas Rolff) Date: Fri, 17 Mar 2017 11:47:21 +0000 Subject: Change target host in proxy_pass In-Reply-To: References: Message-ID: <273B472C-C1E6-415E-B490-21E572400CD7@lucasrolff.com> You can proxy_set_header Host - that should override whatever is defined in proxy_pass From: nginx on behalf of Tomasz Kapek Reply-To: "nginx at nginx.org" Date: Friday, 17 March 2017 at 12.12 To: "nginx at nginx.org" Subject: Change target host in proxy_pass Hello, I have NGINX acting as reverse proxy and I would like to achieve something like this: When I get a request like this GET http://app1.mydomain.aa.com/aaa/bbb it should be converted to: GET http://app1.mydomain.bb.com/aaa/bbb so such directive will do the job: proxy_pass http://app1.mydomain.bb.com; problem is that I want to convert host part automatically (regex) basing on incoming requests to NGINX - app1.mydomain are not fixed they are changing very often. Is it possible? Can anyone get a clue how proxy_pass statement should look like? -------------- next part -------------- An HTML attachment was scrubbed...
URL: From al-nginx at none.at Fri Mar 17 12:53:40 2017 From: al-nginx at none.at (Aleksandar Lazic) Date: Fri, 17 Mar 2017 13:53:40 +0100 Subject: Valid characters in nginx configuration In-Reply-To: References: Message-ID: Hi. Am 15-03-2017 18:00, schrieb SW at EU via nginx: > Hi, > > there can I find the description of the nginx configuration file syntax > e.g. in a BNF like notation. There is defined which characters are > allowed in "name" e.g. of an upstream definition? Only ASCII or UTF8, > only alpha or alphanumeric and if the last, must it start with alpha. > Can I use special characters like "@"? IMO is this the basic > information which should be on top of the documentation but either is > does not exists or is very good stashed ;-) > > I'm not the first which ask this in the web: > http://stackoverflow.com/questions/36485834/valid-characters-in-nginx-upstream-name, > but there is no answer and it is from 10 years ago :-( > > Can anyone help me please? Maybe I use wrong search terms. The name in upstream source is defined as ngx_str_t and ngx_str_t is defined here http://hg.nginx.org/nginx/file/tip/src/core/ngx_string.h#l16 as u_char. You can try different charakters and see if it works. Cheers Aleks From mdounin at mdounin.ru Fri Mar 17 13:09:31 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 17 Mar 2017 16:09:31 +0300 Subject: Illumos/SmartOS/Solaris issue when using eventport and keepalive at the same time In-Reply-To: References: Message-ID: <20170317130931.GT13617@mdounin.ru> Hello! On Fri, Mar 17, 2017 at 11:46:46AM +1300, Peter Toth wrote: > There is a lock-up issue on Solaris derived operating systems when using > "keepalive" and "use eventport" configuration directives at the same time. > > The issue has been described previously back in 2015 and a patch was > proposed here: https://forum.nginx.org/read.php?2,259798,259798#msg-259798 > > I have tested the patch and it is working. 
> > Could anyone comment on whether the patch looks acceptable in its current > form? > Any objections to applying this fix in the next release? There are problems with eventport implementation, known for at least several years now, and these can be easily reproduced by running our test suite with 'use eventport' added to test configurations. If I recall correctly, upstream keepalive is not something important to trigger problems, any connection to an upstream is basically enough. Unfortunately, these problems are very low priority due to minor popularity of Solaris. Consider using /dev/poll instead, which is the default, well tested and has no known problems. The patch in question looks more like a hack to mitigate a particular issue the author was facing. It doesn't resolve the underlying problems though, and likely there are other issues as well. There are no plans to commit this patch. -- Maxim Dounin http://nginx.org/ From francis at daoine.org Fri Mar 17 14:51:44 2017 From: francis at daoine.org (Francis Daly) Date: Fri, 17 Mar 2017 14:51:44 +0000 Subject: proxy_bind with hostname from /etc/hosts possible? In-Reply-To: References: Message-ID: <20170317145144.GM15209@daoine.org> On Mon, Mar 13, 2017 at 10:38:12AM -0400, larsg wrote: Hi there, > is it possible to use an hostname from local /etc/hosts as proxy_bind > value? http://nginx.org/r/proxy_bind says that its argument is an address. So I'm going to say "no". > You can see our current configuration above. > Unfortunately nginx cannot resolve the hostname (localip0 etc.). There is an > error log "invalid local address "localip0"...). If you dive into the source code, you'll see that that error message happens when a call to "ngx_parse_addr_port()" fails; and that function does what its name suggests. > I'm worry that I only can use explicit IP addresses in this situation. Or do > you have an alternative solution? I think you'll need to use IP addresses. 
An option (untested) could be to put the split_clients call into an external file which you "include" in your common nginx.conf, and let *that* file be generated unique per host. And another option could be to have a common nginx-conf-precursor which is distributed to all hosts, and then run a pre-processor of your choice against it to create the individual unique nginx.conf files. Good luck with it, f -- Francis Daly francis at daoine.org From wakkas.rafiq at oracle.com Fri Mar 17 15:08:42 2017 From: wakkas.rafiq at oracle.com (Wakkas Rafiq) Date: Fri, 17 Mar 2017 15:08:42 +0000 (UTC) Subject: Using nginx as proxy Message-ID: <53C85CB8A3F38934.B05EB191-0A7F-42BD-86B4-822F38475C85@mail.outlook.com> Hi all I am trying to setup a simple confit where tcp traffic coming in at specific port - 12000 need to be send to a specific server:3260 In this case source ip will change (which is fine) but we are seeing on tcpdump that source port is changing from 12000 to some way higher value The server rejecting the call due to that How do I setup so the source port remain unchanged? I will send my config once at work - if needed Thanks Get Outlook for iOS -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at ohlste.in Fri Mar 17 15:32:33 2017 From: jim at ohlste.in (Jim Ohlstein) Date: Fri, 17 Mar 2017 11:32:33 -0400 Subject: Forum signup process broken - who admins the forum? In-Reply-To: References: Message-ID: <03D24076-8953-484D-BE55-56FB476446B2@ohlste.in> Please contact me off-list. Jim Ohlstein > On Mar 17, 2017, at 5:02 AM, Jason In North Hollywood wrote: > > Hi, > > I absolutely can not sign up at the forum.nginx.org site. I've tried from many places, my work (a major company) - HK and USA, but no luck. Says I'm a spammer every time. > > Never had this problem before. > > Is the forum signup process broken? > > Can you pass this along to them? 
> > Thanks > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From wakkas.rafiq at oracle.com Fri Mar 17 15:54:25 2017 From: wakkas.rafiq at oracle.com (Wakkas Rafiq) Date: Fri, 17 Mar 2017 08:54:25 -0700 Subject: Using nginx as proxy In-Reply-To: <0CF9F4D3-2BC6-4EC2-95B4-EF9F3E3F77E5@oracle.com> References: <0CF9F4D3-2BC6-4EC2-95B4-EF9F3E3F77E5@oracle.com> Message-ID: <31D1C481-7992-4FA3-8522-0A687C4F4226@oracle.com> Tried server { listen 169.254.2.2:12000; allow 169.254.169.254; deny all; proxy_pass 10.0.52.151:3260; } then when saw source port changing from 12000. Tried adding following but no luck: proxy_bind 169.254.169.254:12000; proxy_bind 127.0.0.1:12000; and proxy_bind $remote_addr:12000; From: nginx on behalf of Wakkas Rafiq Reply-To: Date: Friday, March 17, 2017 at 8:08 AM To: Subject: Using nginx as proxy Hi all I am trying to setup a simple config where tcp traffic coming in at specific port - 12000 need to be send to a specific server:3260 In this case source ip will change (which is fine) but we are seeing on tcpdump that source port is changing from 12000 to some way higher value The server rejecting the call due to that How do I setup so the source port remain unchanged? I will send my config once at work - if needed Thanks Get Outlook for iOS _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Mar 17 16:00:09 2017 From: nginx-forum at forum.nginx.org (larsg) Date: Fri, 17 Mar 2017 12:00:09 -0400 Subject: proxy_bind with hostname from /etc/hosts possible?
In-Reply-To: <20170317145144.GM15209@daoine.org> References: <20170317145144.GM15209@daoine.org> Message-ID: <153a650dc432b4c5376056a93b99bff2.NginxMailingListEnglish@forum.nginx.org> thanks for the reply. indeed, we are generating the split_clients directive on the host it's running. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272918,273009#msg-273009 From rainer at ultra-secure.de Fri Mar 17 16:04:06 2017 From: rainer at ultra-secure.de (rainer at ultra-secure.de) Date: Fri, 17 Mar 2017 17:04:06 +0100 Subject: Using nginx as proxy In-Reply-To: <31D1C481-7992-4FA3-8522-0A687C4F4226@oracle.com> References: <0CF9F4D3-2BC6-4EC2-95B4-EF9F3E3F77E5@oracle.com> <31D1C481-7992-4FA3-8522-0A687C4F4226@oracle.com> Message-ID: <7f6c0bb36081dfd48871605866a9b1d4@ultra-secure.de> Maybe something like if ($host = '') { set $relhost $server_addr; } proxy_set_header Host $relhost:3260; proxy_redirect https://$relhost:3260/ https://$relhost:12000/; Which is what was at least once needed to proxy the Zimbra admin interface that insisted on being called on port 7071. Rainer Am 2017-03-17 16:54, schrieb Wakkas Rafiq: > Tried > > server { > > listen 169.254.2.2:12000; > > allow 169.254.169.254; > > deny all; > > proxy_pass 10.0.52.151:3260; > > } > > then when saw source port changing from 12000. Tried adding following > but no luck: > > proxy_bind 169.254.169.254:12000; > > proxy_bind 127.0.0.1:12000; > > and proxy_bind $remote_addr:12000; > > FROM: nginx on behalf of Wakkas Rafiq > > REPLY-TO: > DATE: Friday, March 17, 2017 at 8:08 AM > TO: > SUBJECT: Using nginx as proxy > > Hi all > > I am trying to setup a simple config where tcp traffic coming in at > specific port - 12000 need to be send to a specific server:3260 > > In this case source ip will change (which is fine) but we are seeing > on tcpdump that source port is changing from 12000 to some way higher > value > > The server rejecting the call due to that > > How do I setup so the source port remain unchanged? 
> > I will send my config once at work - if needed > > Thanks > > Get Outlook for iOS [1] > > _______________________________________________ nginx mailing list > nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx > > Links: > ------ > [1] https://aka.ms/o0ukef > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From wakkas.rafiq at oracle.com Fri Mar 17 17:13:48 2017 From: wakkas.rafiq at oracle.com (Wakkas Rafiq) Date: Fri, 17 Mar 2017 10:13:48 -0700 Subject: Using nginx as proxy In-Reply-To: References: <0CF9F4D3-2BC6-4EC2-95B4-EF9F3E3F77E5@oracle.com> <31D1C481-7992-4FA3-8522-0A687C4F4226@oracle.com> Message-ID: <259EE0DE-391E-4F2F-86F4-137901EC99FA@oracle.com> Thanks Rainer But trying to direct tcp traffic ? so below http/https based will not help Wonder if nginx can handle proxing non http ? tcp traffic thanks On 3/17/17, 9:04 AM, "rainer at ultra-secure.de" wrote: Maybe something like if ($host = '') { set $relhost $server_addr; } proxy_set_header Host $relhost:3260; proxy_redirect https://$relhost:3260/ https://$relhost:12000/; Which is what was at least once needed to proxy the Zimbra admin interface that insisted on being called on port 7071. Rainer Am 2017-03-17 16:54, schrieb Wakkas Rafiq: > Tried > > server { > > listen 169.254.2.2:12000; > > allow 169.254.169.254; > > deny all; > > proxy_pass 10.0.52.151:3260; > > } > > then when saw source port changing from 12000. 
Tried adding following > but no luck: > > proxy_bind 169.254.169.254:12000; > > proxy_bind 127.0.0.1:12000; > > and proxy_bind $remote_addr:12000; > > FROM: nginx on behalf of Wakkas Rafiq > > REPLY-TO: > DATE: Friday, March 17, 2017 at 8:08 AM > TO: > SUBJECT: Using nginx as proxy > > Hi all > > I am trying to setup a simple config where tcp traffic coming in at > specific port - 12000 need to be send to a specific server:3260 > > In this case source ip will change (which is fine) but we are seeing > on tcpdump that source port is changing from 12000 to some way higher > value > > The server rejecting the call due to that > > How do I setup so the source port remain unchanged? > > I will send my config once at work - if needed > > Thanks > > Get Outlook for iOS [1] > > _______________________________________________ nginx mailing list > nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx > > Links: > ------ > [1] https://aka.ms/o0ukef > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nelsonmarcos at gmail.com Fri Mar 17 17:21:55 2017 From: nelsonmarcos at gmail.com (Nelson Marcos) Date: Fri, 17 Mar 2017 14:21:55 -0300 Subject: proxy_buffering off disable cache ? Message-ID: Hello all, Right now we're using Nginx(as a proxy) to serve videos(4gb) and small objects(< 100k), using Openstack Swift as backend. Yesterday, I tried turn proxy_buffering off to see if it would improve nginx performance but it didn't. However, I realised that nginx stoped to write new files on proxy_temp_path and proxy_cache_path until I turn proxy_buffering off. I have two question: 1) Is that a expected behaviour? If I turn proxy_buffering off, nginx will disable cache and temp files? 2) If proxy_buffering is on, shouldn't nginx recieve the entire response, before send it to the cliente? Or does it wait only enough to fill the configured buffers? 
Regards, NM -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Fri Mar 17 17:28:46 2017 From: francis at daoine.org (Francis Daly) Date: Fri, 17 Mar 2017 17:28:46 +0000 Subject: Using nginx as proxy In-Reply-To: <259EE0DE-391E-4F2F-86F4-137901EC99FA@oracle.com> References: <0CF9F4D3-2BC6-4EC2-95B4-EF9F3E3F77E5@oracle.com> <31D1C481-7992-4FA3-8522-0A687C4F4226@oracle.com> <259EE0DE-391E-4F2F-86F4-137901EC99FA@oracle.com> Message-ID: <20170317172846.GO15209@daoine.org> On Fri, Mar 17, 2017 at 10:13:48AM -0700, Wakkas Rafiq wrote: Hi there, > Wonder if nginx can handle proxing non http ? tcp traffic It can; but generally the source port for a tcp connection does not matter. The nginx stream module has no way (that I know of) to set the source port of the tcp connection that it makes to its upstream. (For example: if you had two sessions that both wanted to use source port 12000 to the same destination server port 3260, I'm pretty sure that something would go wrong.) I suspect it may be simpler to find out why the upstream server cares about the source port of the incoming connection, and see if it can be changed not to; that to try to configure any generic tcp-forwarder to use a specific source port for all outgoing connections. You may have more luck with a dedicated tcp-forwarder that knows it can only handle a single connection at once. Cheers, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Fri Mar 17 17:46:24 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 17 Mar 2017 20:46:24 +0300 Subject: proxy_buffering off disable cache ? In-Reply-To: References: Message-ID: <20170317174624.GX13617@mdounin.ru> Hello! On Fri, Mar 17, 2017 at 02:21:55PM -0300, Nelson Marcos wrote: > Hello all, > > Right now we're using Nginx(as a proxy) to serve videos(4gb) and small > objects(< 100k), using Openstack Swift as backend. 
> > Yesterday, I tried turn proxy_buffering off to see if it would improve > nginx performance but it didn't. > > However, I realised that nginx stoped to write new files on proxy_temp_path > and proxy_cache_path until I turn proxy_buffering off. > > > I have two question: > > 1) Is that a expected behaviour? If I turn proxy_buffering off, nginx will > disable cache and temp files? Yes. > 2) If proxy_buffering is on, shouldn't nginx recieve the entire response, > before send it to the cliente? Or does it wait only enough to fill the > configured buffers? No, it only waits for one buffer to be filled. -- Maxim Dounin http://nginx.org/ From nelsonmarcos at gmail.com Sat Mar 18 00:26:50 2017 From: nelsonmarcos at gmail.com (Nelson Marcos) Date: Fri, 17 Mar 2017 21:26:50 -0300 Subject: proxy_buffering off disable cache ? In-Reply-To: <20170317174624.GX13617@mdounin.ru> References: <20170317174624.GX13617@mdounin.ru> Message-ID: Thanks \o/ Um abraço, NM 2017-03-17 14:46 GMT-03:00 Maxim Dounin : > Hello! > > On Fri, Mar 17, 2017 at 02:21:55PM -0300, Nelson Marcos wrote: > > > Hello all, > > > > Right now we're using Nginx(as a proxy) to serve videos(4gb) and small > > objects(< 100k), using Openstack Swift as backend.
> > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter.toth198 at gmail.com Sat Mar 18 08:05:07 2017 From: peter.toth198 at gmail.com (Peter Toth) Date: Sat, 18 Mar 2017 21:05:07 +1300 Subject: Illumos/SmartOS/Solaris issue when using eventport and keepalive at the same time In-Reply-To: <20170317130931.GT13617@mdounin.ru> References: <20170317130931.GT13617@mdounin.ru> Message-ID: Great, thanks for clarifying this. On Sat, Mar 18, 2017 at 2:09 AM, Maxim Dounin wrote: > Hello! > > On Fri, Mar 17, 2017 at 11:46:46AM +1300, Peter Toth wrote: > > > There is a lock-up issue on Solaris derived operating systems when using > > "keepalive" and "use eventport" configuration directives at the same > time. > > > > The issue has been described previously back in 2015 and a patch was > > proposed here: https://forum.nginx.org/read. > php?2,259798,259798#msg-259798 > > > > I have tested the patch and it is working. > > > > Could anyone comment on whether the patch looks acceptable in its current > > form? > > Any objections to applying this fix in the next release? > > There are problems with eventport implementation, known for at > least several years now, and these can be easily reproduced by > running our test suite with 'use eventport' added to test > configurations. If I recall correctly, upstream keepalive is not > something important to trigger problems, any connection to an > upstream is basically enough. > > Unfortunately, these problems are very low priority due to > minor popularity of Solaris. Consider using /dev/poll instead, > which is the default, well tested and has no known problems. > > The patch in question looks more like a hack to mitigate a > particular issue the author was facing. 
It doesn't resolve the > underlying problems though, and likely there are other issues as > well. There are no plans to commit this patch. > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From list at xdrv.co.uk Sat Mar 18 11:14:03 2017 From: list at xdrv.co.uk (James) Date: Sat, 18 Mar 2017 11:14:03 +0000 Subject: Illumos/SmartOS/Solaris issue when using eventport and keepalive at the same time In-Reply-To: <20170317130931.GT13617@mdounin.ru> References: <20170317130931.GT13617@mdounin.ru> Message-ID: <821683c1-2489-dc60-3de3-79223b8dda2c@xdrv.co.uk> On 17/03/2017 13:09, Maxim Dounin wrote: Hello! > There are problems with eventport implementation, known for at > least several years now, and these can be easily reproduced by > running our test suite with 'use eventport' added to test > configurations. If I recall correctly, upstream keepalive is not > something important to trigger problems, any connection to an > upstream is basically enough. > > Unfortunately, these problems are very low priority due to > minor popularity of Solaris. Consider using /dev/poll instead, > which is the default, well tested and has no known problems. I accept all that, but I learnt it the hard way myself. Perhaps the docs could stop recommending eventport on Solaris and warn against its use for now, e.g.: http://nginx.org/en/docs/events.html Thanks! James. From acgtek at yahoo.com Sun Mar 19 03:03:57 2017 From: acgtek at yahoo.com (Jun Chen) Date: Sun, 19 Mar 2017 03:03:57 +0000 (UTC) Subject: Error with nginx reverse proxy setup References: <1532700441.2979767.1489892637433.ref@mail.yahoo.com> Message-ID: <1532700441.2979767.1489892637433@mail.yahoo.com> Hi All, I am setting up my first reverse proxy by following online posts.
The problem is that when I type http://my_ip_address/my_rev it returns a 404 error:

Not Found
The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.

Here is what I did:

1. installed nginx 1.10.0 on ubuntu 16.04
2. created file my_nx.conf under /etc/sites-available with the following:

    server {
        listen 80;
        server_name my_ip_address;

        location /my_rev {
            proxy_pass http://192.168.1.65:5000;
            include /etc/nginx/proxy_params;
        }
    }

3. Under /etc/sites-enabled, a symlink my_nx.conf was generated pointing to /etc/sites-available/my_nx.conf
4. restart nginx
5. On browser, type http://my_ip_address/my_rev and get the error

The configuration seems very straightforward. What have I missed? Many thanks. -Jun C -------------- next part -------------- An HTML attachment was scrubbed... URL: From anoopalias01 at gmail.com Sun Mar 19 03:54:31 2017 From: anoopalias01 at gmail.com (Anoop Alias) Date: Sun, 19 Mar 2017 09:24:31 +0530 Subject: Error with nginx reverse proxy setup In-Reply-To: <1532700441.2979767.1489892637433@mail.yahoo.com> References: <1532700441.2979767.1489892637433.ref@mail.yahoo.com> <1532700441.2979767.1489892637433@mail.yahoo.com> Message-ID: Are you sure you don't have a default vhost? Try adding a server_name and enter that name in the browser, so you are sure you are hitting the correct server {} block in the config. Or, if you want to use the IP, ensure the vhost you add is the default. On 19-Mar-2017 8:35 AM, "Jun Chen via nginx" wrote: > Hi All, > > I am setting my first reverse proxy by following online posts. The problem > is that when I type the http://my_ip_address/my_rev and it returns an 404 > error: > > Not Found > The requested URL was not found on the server. > If you entered the URL manually please check your spelling and try again. > > Here is what I did: > > 1.
installed nginx 1.10.0 on ubuntu 16.04 > 2. created file my_nx.conf under /etc/sites-available with following: > > server { > listen 80; > server_name my_ip_address; > > location /my_rev { > proxy_pass http://192.168.1.65:5000; > include /etc/nginx/proxy_params; > } > } > 3. Under /etc/sites-enabled, a symlink my_nx.conf was generated pointing > to /etc/sites-available/my_nx.conf > 4. restart nginx > 5. On browser, type http://my_ip_address/my_rev and, the error > > The configuration seems very straightforward. Where have I missed? Many > thanks. > > -Jun C > > > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sun Mar 19 13:55:16 2017 From: nginx-forum at forum.nginx.org (gil.yoder) Date: Sun, 19 Mar 2017 09:55:16 -0400 Subject: Trouble Integrating Moodle on Nginx Message-ID: I am trying to install Moodle under nginx but I have run into a snag that's preventing me from completing the install. I have nginx working under Ubuntu 16.04. PHP is installed at least well enough to run info.php from the root directory. The Moodle browser install works correctly up until the point that it tries to configure the main administrator account. Chrome developer tools show that a lot of errors are occurring when the page loads. All of the errors are for resources that are requested through PHP pages. I think the problem may have to do with how I have FastCGI set up, but I don't know what exactly is wrong. When I run nginx -T I get the following output at the bottom of this note. Can anyone see a problem with the settings that would explain my problem?
Gil Yoder

nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful

# configuration file /usr/local/nginx/conf/nginx.conf:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    include /usr/local/nginx/conf/sites_enabled/*.conf;
}

rtmp {
    server {
        listen 1935;
        chunk_size 4096;

        application live {
            live on;
            record off;
            exec ffmpeg -i rtmp://localhost/live/$name -threads 1 -c:v libx264 -profile:v baseline -b:v 350K -s 640x360 -f flv -c:a aac -ac 1 -strict -2 -b:a 56k rtmp://localhost/live360p/$name;
        }

        application live360p {
            live on;
            record off;
        }
    }
}

# configuration file /usr/local/nginx/conf/mime.types:
types {
    text/html html htm shtml;
    text/css css;
    text/xml xml;
    image/gif gif;
    image/jpeg jpeg jpg;
    application/javascript js;
    application/atom+xml atom;
    application/rss+xml rss;
    text/mathml mml;
    text/plain txt;
    text/vnd.sun.j2me.app-descriptor jad;
    text/vnd.wap.wml wml;
    text/x-component htc;
    image/png png;
    image/tiff tif tiff;
    image/vnd.wap.wbmp wbmp;
    image/x-icon ico;
    image/x-jng jng;
    image/x-ms-bmp bmp;
    image/svg+xml svg svgz;
    image/webp webp;
    application/font-woff woff;
    application/java-archive jar war ear;
    application/json json;
    application/mac-binhex40 hqx;
    application/msword doc;
    application/pdf pdf;
    application/postscript ps eps ai;
    application/rtf rtf;
    application/vnd.apple.mpegurl m3u8;
    application/vnd.ms-excel xls;
    application/vnd.ms-fontobject eot;
    application/vnd.ms-powerpoint ppt;
    application/vnd.wap.wmlc wmlc;
    application/vnd.google-earth.kml+xml kml;
    application/vnd.google-earth.kmz kmz;
    application/x-7z-compressed 7z;
    application/x-cocoa cco;
    application/x-java-archive-diff jardiff;
    application/x-java-jnlp-file jnlp;
    application/x-makeself run;
    application/x-perl pl pm;
    application/x-pilot prc pdb;
    application/x-rar-compressed rar;
    application/x-redhat-package-manager rpm;
    application/x-sea sea;
    application/x-shockwave-flash swf;
    application/x-stuffit sit;
    application/x-tcl tcl tk;
    application/x-x509-ca-cert der pem crt;
    application/x-xpinstall xpi;
    application/xhtml+xml xhtml;
    application/xspf+xml xspf;
    application/zip zip;
    application/octet-stream bin exe dll;
    application/octet-stream deb;
    application/octet-stream dmg;
    application/octet-stream iso img;
    application/octet-stream msi msp msm;
    application/vnd.openxmlformats-officedocument.wordprocessingml.document docx;
    application/vnd.openxmlformats-officedocument.spreadsheetml.sheet xlsx;
    application/vnd.openxmlformats-officedocument.presentationml.presentation pptx;
    audio/midi mid midi kar;
    audio/mpeg mp3;
    audio/ogg ogg;
    audio/x-m4a m4a;
    audio/x-realaudio ra;
    video/3gpp 3gpp 3gp;
    video/mp2t ts;
    video/mp4 mp4;
    video/mpeg mpeg mpg;
    video/quicktime mov;
    video/webm webm;
    video/x-flv flv;
    video/x-m4v m4v;
    video/x-mng mng;
    video/x-ms-asf asx asf;
    video/x-ms-wmv wmv;
    video/x-msvideo avi;
}

# configuration file /usr/local/nginx/conf/sites_enabled/default.conf:
server {
    listen 80;
    server_name 192.168.1.27;
    root /var/www/html;
    index index.php index.html index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }

    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass 127.0.0.1:9000;
    }

    location ~ /\.ht {
        deny all;
    }
}

# configuration file /usr/local/nginx/conf/snippets/fastcgi-php.conf:
# regex to split $uri to $fastcgi_script_name and $fastcgi_path
fastcgi_split_path_info ^(.+\.php)(/.+)$;
# Check that the PHP script exists before passing it
try_files $fastcgi_script_name =404;
# Bypass the fact that try_files resets $fastcgi_path_info
# see: http://trac.nginx.org/nginx/ticket/321
set $path_info $fastcgi_path_info;
fastcgi_param PATH_INFO $path_info;
fastcgi_index index.php;
include fastcgi.conf;

# configuration file /usr/local/nginx/conf/fastcgi.conf:
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_param QUERY_STRING $query_string;
fastcgi_param REQUEST_METHOD $request_method;
fastcgi_param CONTENT_TYPE $content_type;
fastcgi_param CONTENT_LENGTH $content_length;
fastcgi_param SCRIPT_NAME $fastcgi_script_name;
fastcgi_param REQUEST_URI $request_uri;
fastcgi_param DOCUMENT_URI $document_uri;
fastcgi_param DOCUMENT_ROOT $document_root;
fastcgi_param SERVER_PROTOCOL $server_protocol;
fastcgi_param REQUEST_SCHEME $scheme;
fastcgi_param HTTPS $https if_not_empty;
fastcgi_param GATEWAY_INTERFACE CGI/1.1;
fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
fastcgi_param REMOTE_ADDR $remote_addr;
fastcgi_param REMOTE_PORT $remote_port;
fastcgi_param SERVER_ADDR $server_addr;
fastcgi_param SERVER_PORT $server_port;
fastcgi_param SERVER_NAME $server_name;
# PHP only, required if PHP was built with --enable-force-cgi-redirect
fastcgi_param REDIRECT_STATUS 200;

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273026,273026#msg-273026 From nginx-forum at forum.nginx.org Mon Mar 20 00:48:57 2017 From: nginx-forum at forum.nginx.org (gil.yoder) Date: Sun, 19 Mar 2017 20:48:57 -0400 Subject: Trouble Integrating Moodle on Nginx In-Reply-To: References: Message-ID: <4277b25774a6feaaaa3207fd656d2d6e.NginxMailingListEnglish@forum.nginx.org> I was able to work this out by reviewing some of the instructions I was using to build out this server. I found a configuration line that I had missed for splitting the PHP URL and rewriting it to a standard query format. Once that was done and Nginx was reloaded, Moodle was able to complete the Admin configuration correctly and everything worked fine.
Gil Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273026,273030#msg-273030 From nginx-forum at forum.nginx.org Mon Mar 20 12:11:06 2017 From: nginx-forum at forum.nginx.org (polder_trash) Date: Mon, 20 Mar 2017 08:11:06 -0400 Subject: Balancing NGINX reverse proxy In-Reply-To: <1cc3f5a0-a427-6649-6240-35dfe9312801@greengecko.co.nz> References: <1cc3f5a0-a427-6649-6240-35dfe9312801@greengecko.co.nz> Message-ID: <679c1d63b7b9cecf26196dad5ccb4c6e.NginxMailingListEnglish@forum.nginx.org> Thanks for your answer. It turned out a monitoring system was doing DNS lookups in the meantime, therefore it appeared that two clients got the same DNS response. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272713,273041#msg-273041 From igal at lucee.org Mon Mar 20 19:32:00 2017 From: igal at lucee.org (Igal @ Lucee.org) Date: Mon, 20 Mar 2017 12:32:00 -0700 Subject: http/2 for Windows Message-ID: Hi, Is there a technical reason why the Windows binaries do not include the http/2 option? I understand that it's advised to use Linux, but that's not always an option. Thanks, Igal Sapir Lucee Core Developer Lucee.org -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at forum.nginx.org Mon Mar 20 20:46:28 2017 From: nginx-forum at forum.nginx.org (mgmuhilan) Date: Mon, 20 Mar 2017 16:46:28 -0400 Subject: Multitenant invalid configuration reload Message-ID: <9fee24970ff4869eb09e1a1686a749b8.NginxMailingListEnglish@forum.nginx.org> Hello Everyone, I had asked this question on stackoverflow (http://stackoverflow.com/questions/42771887/invalid-host-in-upstream-for-multitenant) earlier, but I did not get any response so far, so I wanted to check here. For now, I have a POC which generates tenant configs (one per tenant) in a temp destination, validates them using the "nginx -t" option, and then finally copies all the good tenant configs to the actual destination and reloads. Is there a simpler solution for the problem mentioned in the stackoverflow thread? Thanks in advance. Regards, Muhilan Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273061,273061#msg-273061 From igal at lucee.org Tue Mar 21 06:40:10 2017 From: igal at lucee.org (Igal @ Lucee.org) Date: Mon, 20 Mar 2017 23:40:10 -0700 Subject: Tomcat EOFException when nginx is Reverse Proxy for WebSockets In-Reply-To: References: Message-ID: I am getting an EOFException after a couple of minutes when connecting via nginx, which acts as a reverse proxy. I am running a Windows 2008R2 server, with Tomcat 8.5.12 and nginx 1.11.10. When connecting directly to Tomcat I do not see this issue, but the direct connection is local (i.e. the network connection is not a factor) while the connection via nginx goes over the internet.
My nginx config is as follows:

## WebSocket begin
location /ws/ {
    proxy_pass http://lucee_servers_81xx;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
## WebSocket end

Java Stack Trace:

java.io.EOFException
    org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper.fillReadBuffer(NioEndpoint.java:1228)
    org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper.read(NioEndpoint.java:1168)
    org.apache.tomcat.websocket.server.WsFrameServer.onDataAvailable(WsFrameServer.java:63)
    org.apache.tomcat.websocket.server.WsHttpUpgradeHandler.upgradeDispatch(WsHttpUpgradeHandler.java:148)
    org.apache.coyote.http11.upgrade.UpgradeProcessorInternal.dispatch(UpgradeProcessorInternal.java:54)
    org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:53)
    org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:798)
    org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1441)
    org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    java.lang.Thread.run(Thread.java:745)

Any ideas? I will post a similar question to the Tomcat mailing list. Thanks!
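[For readers comparing notes: a minimal sketch of the surrounding context such a location block needs. The upstream members and the timeout value here are illustrative placeholders, not taken from the poster's real config; the read-timeout knob is the one suggested later in this thread.]

```nginx
# Sketch only -- backend address and timeout are hypothetical examples.
upstream lucee_servers_81xx {
    server 127.0.0.1:8181;  # placeholder backend; real members unknown
}

server {
    listen 80;

    ## WebSocket begin
    location /ws/ {
        proxy_pass http://lucee_servers_81xx;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # By default nginx closes a proxied connection after 60 seconds
        # without data from the upstream; a longer proxy_read_timeout
        # (or application-level WebSocket ping/pong) keeps idle sockets open.
        proxy_read_timeout 3600s;
    }
    ## WebSocket end
}
```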
From nginx-forum at forum.nginx.org Tue Mar 21 06:54:28 2017 From: nginx-forum at forum.nginx.org (hheiko) Date: Tue, 21 Mar 2017 02:54:28 -0400 Subject: http/2 for Windows In-Reply-To: References: Message-ID: <88e05434c9b42da771fe7b2ba557f715.NginxMailingListEnglish@forum.nginx.org> Try http://nginx-win.ecsds.eu/ with http/2 and many more features on Windows Heiko Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273059,273065#msg-273065 From igal at lucee.org Tue Mar 21 07:18:06 2017 From: igal at lucee.org (Igal @ Lucee.org) Date: Tue, 21 Mar 2017 00:18:06 -0700 Subject: http/2 for Windows In-Reply-To: <88e05434c9b42da771fe7b2ba557f715.NginxMailingListEnglish@forum.nginx.org> References: <88e05434c9b42da771fe7b2ba557f715.NginxMailingListEnglish@forum.nginx.org> Message-ID: <40ebe7ff-4145-a650-ebb9-4ef3173f9d3c@lucee.org> Thank you, but that's a commercial solution and your reply did not answer my question of whether there is a technical reason for the Windows binaries not including http/2 or not. TBH, I will switch to Linux long before I pay 500 EUR per year with those restrictions. Igal Sapir Lucee Core Developer Lucee.org On 3/20/2017 11:54 PM, hheiko wrote: > Try > > http://nginx-win.ecsds.eu/ > > with http/2 and many more features on Windows > > Heiko > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273059,273065#msg-273065 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Tue Mar 21 07:18:56 2017 From: nginx-forum at forum.nginx.org (itpp2012) Date: Tue, 21 Mar 2017 03:18:56 -0400 Subject: Tomcat EOFException when nginx is Reverse Proxy for WebSockets In-Reply-To: References: Message-ID: Looks like this issue: http://stackoverflow.com/questions/30619703/java-io-eofexception-when-downloading-files-from-remote-server-to-android-phone Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273064,273066#msg-273066 From igal at lucee.org Tue Mar 21 08:37:16 2017 From: igal at lucee.org (Igal @ Lucee.org) Date: Tue, 21 Mar 2017 01:37:16 -0700 Subject: Tomcat EOFException when nginx is Reverse Proxy for WebSockets In-Reply-To: References: Message-ID: <72e20e0a-c507-1105-958e-62e6aae39cb9@lucee.org> Well, in both issues nginx closes the connection prematurely, so the exception in Java is similar, but they are very different issues. I suspect that this issue is specific to WebSocket proxying. I am connecting locally both through nginx (on port 80) and directly to Tomcat (on port 8080), and the Tomcat connection is stable, while the nginx connection drops intermittently. If there are any ideas on how to further troubleshoot this issue or to provide more information, I'd love to hear them. Thanks, Igal Sapir Lucee Core Developer Lucee.org On 3/21/2017 12:18 AM, itpp2012 wrote: > Looks like this issue: > http://stackoverflow.com/questions/30619703/java-io-eofexception-when-downloading-files-from-remote-server-to-android-phone > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273064,273066#msg-273066 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From r at roze.lv Tue Mar 21 09:33:18 2017 From: r at roze.lv (Reinis Rozitis) Date: Tue, 21 Mar 2017 11:33:18 +0200 Subject: Tomcat EOFException when nginx is Reverse Proxy for WebSockets In-Reply-To: References: Message-ID: > Any ideas? Try to increase the proxy_read_timeout. "By default, the connection will be closed if the proxied server does not transmit any data within 60 seconds. This timeout can be increased with the proxy_read_timeout directive" http://nginx.org/en/docs/http/websocket.html rr From maxozerov at i-free.com Tue Mar 21 11:38:35 2017 From: maxozerov at i-free.com (Maxim Ozerov) Date: Tue, 21 Mar 2017 11:38:35 +0000 Subject: Multitenant invalid configuration reload In-Reply-To: <9fee24970ff4869eb09e1a1686a749b8.NginxMailingListEnglish@forum.nginx.org> References: <9fee24970ff4869eb09e1a1686a749b8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <09c9a182512b4c419729853fbca865a9@srv-exch-mb02.i-free.local> I think the reason nobody answered is that the essence of the problem is not clearly formulated. What exactly is the issue - that you form the configuration file incorrectly? -----Original Message----- From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of mgmuhilan Sent: Monday, March 20, 2017 11:46 PM To: nginx at nginx.org Subject: Multitenant invalid configuration reload Hello Everyone, I had asked this question in stackoverflow (http://stackoverflow.com/questions/42771887/invalid-host-in-upstream-for-multitenant) earlier, but I did not get any response so far, so I wanted to check here.For now, I have a POC which generates tenant config (one per tenant) in a temp destination and then validates using "nginx -t "option and then finally all the good tenant configs are copied to the actual destination and then reloaded. Is there a simpler solution for the problem mentioned in the stackoverflow thread ? Thanks in advance.
Regards, Muhilan Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273061,273061#msg-273061 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From rajesh.parameswaran at axissecurities.in Tue Mar 21 12:31:19 2017 From: rajesh.parameswaran at axissecurities.in (Rajesh Parameswaran\SM\IT\CO) Date: Tue, 21 Mar 2017 18:01:19 +0530 Subject: NGINX Support for version 1.9.4 Message-ID: Dear Team, Kindly confirm if the Nginx version 1.9.4 is still supported. Would appreciate your response at the earliest possible. Thanking you in Advance. Thanks Rajesh Parameswaran Axis Securities Ltd. Landline: 022 42671670 Cell: +91 9619674344 -- "Information contained and transmitted by this E-MAIL including any attachment is proprietary to Axis Securities Limited and is intended solely for the addressee/s, and may contain information that is privileged, confidential or exempt from disclosure under applicable law. Access to this e-mail and/or to the attachment by anyone else is unauthorized. If this is a forwarded message, the content and the views expressed in this E-MAIL may not reflect those of the organisation. If you are not the intended recipient, an agent of the intended recipient or a person responsible for delivering the information to the named recipient, you are notified that any use, distribution, transmission, printing, copying or dissemination of this information in any way or in any manner is strictly prohibited. If you are not the intended recipient of this mail kindly delete from your system and inform the sender. There is no guarantee that the integrity of this communication has been maintained and nor is this communication free of viruses, interceptions or interference". -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at forum.nginx.org Tue Mar 21 13:33:44 2017 From: nginx-forum at forum.nginx.org (mgmuhilan) Date: Tue, 21 Mar 2017 09:33:44 -0400 Subject: Multitenant invalid configuration reload In-Reply-To: <09c9a182512b4c419729853fbca865a9@srv-exch-mb02.i-free.local> References: <09c9a182512b4c419729853fbca865a9@srv-exch-mb02.i-free.local> Message-ID: <13943abc1ae9771f65c7923755c42cb0.NginxMailingListEnglish@forum.nginx.org> Hi Maxim, Thank you. I removed unwanted information from the stackoverflow question. http://stackoverflow.com/questions/42771887/invalid-host-in-upstream-for-multitenant Can you please take a look at it now? Regards, Muhilan Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273061,273080#msg-273080 From nginx-forum at forum.nginx.org Tue Mar 21 13:43:43 2017 From: nginx-forum at forum.nginx.org (ManuelRighi) Date: Tue, 21 Mar 2017 09:43:43 -0400 Subject: nginx as reverse proxy and custom 500 error Message-ID: <0e010640f9b29fe91a79f894c4e05858.NginxMailingListEnglish@forum.nginx.org> Hello, I use nginx as a reverse proxy for my IIS web server. I have a custom error page that is shown when a 500 error occurs. My goal is to show the custom error page to everyone, but the detailed error (essentially no custom error page) to my specific IP. Is it possible to serve the detailed error page to my specific IP and the custom error page to all others? Or to ignore the error_page directive if a 500 error occurs and the request is from my specific IP? Thanks Manuel Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273081,273081#msg-273081 From maxim at nginx.com Tue Mar 21 14:09:06 2017 From: maxim at nginx.com (Maxim Konovalov) Date: Tue, 21 Mar 2017 17:09:06 +0300 Subject: NGINX Support for version 1.9.4 In-Reply-To: References: Message-ID: Hello, Currently, 1.10 and 1.11 branches are supported. On 3/21/17 3:31 PM, Rajesh Parameswaran\SM\IT\CO wrote: > Dear Team, > > Kindly confirm if the Nginx version 1.9.4 is still supported.
Would > appreciate your response at the earliest possible. > > Thanking you in Advance. > > > Thanks > Rajesh Parameswaran > Axis Securities Ltd. > Landline: 022 42671670 > Cell: +91 9619674344 
> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Maxim Konovalov From nginx-forum at forum.nginx.org Tue Mar 21 14:10:34 2017 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Tue, 21 Mar 2017 10:10:34 -0400 Subject: http/2 for Windows In-Reply-To: <40ebe7ff-4145-a650-ebb9-4ef3173f9d3c@lucee.org> References: <40ebe7ff-4145-a650-ebb9-4ef3173f9d3c@lucee.org> Message-ID: <5ba15505f435d02ef0e84d91253a94c3.NginxMailingListEnglish@forum.nginx.org> Those are itpp2012's Windows builds; I believe he is an admin on the mailing list. https://forum.nginx.org/profile.php?11,7488 Under all his posts it says he is an admin. I have used his builds; you can download them for free, just like the nginx mainline builds from nginx.org. But specific custom features cost money, just as you would have to pay for Nginx+ https://www.nginx.com/ But it seems this is the latest build: http://nginx-win.ecsds.eu/download/nginx%201.11.10.1%20Lion.zip It comes compiled with everything you see in the product files. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273059,273085#msg-273085 From mdounin at mdounin.ru Tue Mar 21 14:41:12 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 21 Mar 2017 17:41:12 +0300 Subject: http/2 for Windows In-Reply-To: References: Message-ID: <20170321144112.GG13617@mdounin.ru> Hello! On Mon, Mar 20, 2017 at 12:32:00PM -0700, Igal @ Lucee.org wrote: > Is there a technical reason why the Windows binaries do not include the > http/2 option? > > I understand that it's advised to use Linux, but that's not always an > option. There are no serious technical reasons. We didn't add HTTP/2 to the Windows build mostly to avoid dealing with Windows debugging, though this is no longer a good enough reason as the code is more or less stable. I've added HTTP/2 to the build, so the win32 version with HTTP/2 will be available later today. 
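[Editor's note, not part of the original thread: for readers picking up a build that includes ngx_http_v2_module, enabling HTTP/2 is a per-listener flag. A minimal, untested sketch -- the server name and certificate paths are placeholders, not values from this thread:]

```nginx
server {
    # On 1.9.5+ (including the 1.11.x builds discussed here), adding
    # "http2" to an ssl listener is all that is needed once the module
    # is compiled in. example.com and the cert paths are placeholders.
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /path/to/fullchain.pem;
    ssl_certificate_key /path/to/privkey.pem;
}
```

You can confirm the module is present by checking `nginx -V` output for --with-http_v2_module in the configure arguments, as Igal's output later in this thread shows.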
-- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Mar 21 15:18:56 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 21 Mar 2017 18:18:56 +0300 Subject: nginx-1.11.11 Message-ID: <20170321151856.GH13617@mdounin.ru>

Changes with nginx 1.11.11                                    21 Mar 2017

    *) Feature: the "worker_shutdown_timeout" directive.

    *) Feature: vim syntax highlighting scripts improvements. Thanks to
       Wei-Ko Kao.

    *) Bugfix: a segmentation fault might occur in a worker process if the
       $limit_rate variable was set to an empty string.

    *) Bugfix: the "proxy_cache_background_update",
       "fastcgi_cache_background_update", "scgi_cache_background_update",
       and "uwsgi_cache_background_update" directives might work incorrectly
       if the "if" directive was used.

    *) Bugfix: a segmentation fault might occur in a worker process if the
       number of large_client_header_buffers in a virtual server was
       different from the one in the default server.

    *) Bugfix: in the mail proxy server.

-- Maxim Dounin http://nginx.org/ From themad at emailn.de Tue Mar 21 15:28:44 2017 From: themad at emailn.de (Marius Wigger) Date: Tue, 21 Mar 2017 16:28:44 +0100 Subject: limit_except per user Message-ID: An HTML attachment was scrubbed... URL: From erwin12344321 at yahoo.de Tue Mar 21 15:59:51 2017 From: erwin12344321 at yahoo.de (erwin mueller) Date: Tue, 21 Mar 2017 15:59:51 +0000 (UTC) Subject: nginx reverse proxy with subdomains not working with docker containers References: <1208375565.8071510.1490111991839.ref@mail.yahoo.com> Message-ID: <1208375565.8071510.1490111991839@mail.yahoo.com> Hello, I am quite desperate. I am running nginx as a reverse proxy installed directly on the server. For an error analysis I created two similar containers listening on ports 83 and 84. 
These two containers should be accessed by two different subdomains. If I access the cops address it works awesome; if I do the same with the pydio address it ends up in a connection failure and the URL changes to https://localhost. I have no idea what to change to get it running. This is my site.conf:

server {
    listen 80;
    server_name myserveradress.de pydio.myserveradress.de cops.myserveradress.de;
    return 301 https://$host$request_uri;

    ## Disable viewing .htaccess & .htpassword
    location ~ /\.ht {
        deny all;
    }

    root /var/www;
    access_log /var/log/nginx/access_proxy.log;
    error_log /var/log/nginx/error_proxy.log info;

    location ^~ /.well-known/acme-challenge {
        proxy_pass http://127.0.0.1:81;
        proxy_redirect off;
    }
}

server {
    listen 443 ssl;
    server_name myserveradress.de;
    root /var/www;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/myserveradress.de/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myserveradress.de/privkey.pem;
    ssl_protocols TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK';
    ssl_dhparam /etc/nginx/ssl/dhparams.pem;
    ssl_ecdh_curve secp384r1;
    ssl_prefer_server_ciphers on;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/letsencrypt/live/myserveradress.de/fullchain.pem;
    ssl_session_timeout 24h;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;

    # Add headers to serve security related headers
    add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";
    add_header X-Content-Type-Options nosniff;
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Robots-Tag none;

    # location = / {
    #     # Disable access to the web root, otherwise nginx will show the default site here.
    #     deny all;
    # }

    location ^~ /rpimonitor {
        proxy_connect_timeout 300;
        proxy_send_timeout 300;
        proxy_read_timeout 300;
        send_timeout 300;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:8888;
        proxy_redirect off;
    }

    location ^~ /owncloud {
        client_max_body_size 1G;
        proxy_connect_timeout 300;
        proxy_send_timeout 300;
        proxy_read_timeout 300;
        send_timeout 300;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://192.168.100.55:80;
        proxy_redirect off;
    }
}

server {
    listen 443 ssl;
    server_name cops.myserveradress.de;
    root /var/www;
    access_log /var/log/nginx/cops_access.log;
    error_log /var/log/nginx/cops_error.log info;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/cops.myserveradress.de/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/cops.myserveradress.de/privkey.pem;
    ssl_protocols TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK';
    ssl_dhparam /etc/nginx/ssl/dhparams.pem;
    ssl_ecdh_curve secp384r1;
    ssl_prefer_server_ciphers on;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/letsencrypt/live/cops.myserveradress.de/fullchain.pem;
    ssl_session_timeout 24h;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;

    # Add headers to serve security related headers
    add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";
    add_header X-Content-Type-Options nosniff;
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Robots-Tag none;

    location ^~ / {
        auth_basic "Restricted";                   # message shown on access
        auth_basic_user_file /etc/nginx/.htpasswd; # path to the .htpasswd file
        proxy_connect_timeout 300;
        proxy_send_timeout 300;
        proxy_read_timeout 300;
        send_timeout 300;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:83;
        proxy_redirect off;
    }
}

server {
    listen 443 ssl;
    server_name pydio.myserveradress.de;
    root /var/www;
    # access_log /var/log/nginx/cops_access.log;
    # error_log /var/log/nginx/cops_error.log info;

    ssl on;
    ssl_certificate /etc/letsencrypt/live/pydio.myserveradress.de/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/pydio.myserveradress.de/privkey.pem;
    ssl_protocols TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK';
    ssl_dhparam /etc/nginx/ssl/dhparams.pem;
    ssl_ecdh_curve secp384r1;
    ssl_prefer_server_ciphers on;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/letsencrypt/live/pydio.myserveradress.de/fullchain.pem;
    ssl_session_timeout 24h;
    ssl_session_cache shared:SSL:50m;
    ssl_session_tickets off;

    # Add headers to serve security related headers
    add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload;";
    add_header X-Content-Type-Options nosniff;
    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Robots-Tag none;

    location ^~ / {
        auth_basic "Restricted";                   # message shown on access
        auth_basic_user_file /etc/nginx/.htpasswd; # path to the .htpasswd file
        proxy_connect_timeout 300;
        proxy_send_timeout 300;
        proxy_read_timeout 300;
        send_timeout 300;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:84;
        proxy_redirect off;
    }
}

A strange thing: I find nothing in the error logs. Thanks for your help. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Mar 21 16:35:51 2017 From: nginx-forum at forum.nginx.org (nginxsantos) Date: Tue, 21 Mar 2017 12:35:51 -0400 Subject: HTTP To TCP Conversion Message-ID: <250638451f747492a2e999de027298bb.NginxMailingListEnglish@forum.nginx.org> Hi, I am planning to use Nginx as a webserver at the front end of my subscriber tasks (1:N, 1 REST server for N subscriber tasks). The communication between Nginx and my tasks would be over TCP. So, when Nginx receives the HTTP messages, it will parse each message and then push it to those registered TCP clients. Any idea how this can be done? 
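[Editor's note, not part of the original thread: nginx by itself has no mechanism to push a parsed HTTP message out to a set of registered TCP clients. The usual shape is nginx terminating HTTP in front of a small broker process that owns the TCP connections and does the fan-out. A rough, untested sketch of the nginx half -- the upstream name and 127.0.0.1:9000 are illustrative assumptions, not anything from this thread:]

```nginx
# Sketch only: nginx terminates HTTP and forwards each request to a
# broker process on 127.0.0.1:9000. The broker (your own code) keeps
# the N registered TCP client connections and broadcasts to them.
upstream subscriber_broker {
    server 127.0.0.1:9000;   # assumed address of the broker process
}

server {
    listen 80;

    location /notify {
        proxy_pass http://subscriber_broker;
        proxy_set_header Host $host;
    }
}
```

The parse-and-broadcast step then lives in the broker (a standalone daemon, or something like OpenResty/Lua), not in nginx itself.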
Thanks, Santos Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273098,273098#msg-273098 From igal at lucee.org Tue Mar 21 16:55:55 2017 From: igal at lucee.org (Igal @ Lucee.org) Date: Tue, 21 Mar 2017 09:55:55 -0700 Subject: http/2 for Windows In-Reply-To: <20170321144112.GG13617@mdounin.ru> References: <20170321144112.GG13617@mdounin.ru> Message-ID: <642f68ef-3f8d-c512-e184-cc3ed0d66956@lucee.org> Awesome! Thank you, Maxim :) Igal Sapir Lucee Core Developer Lucee.org On 3/21/2017 7:41 AM, Maxim Dounin wrote: > Hello! > > On Mon, Mar 20, 2017 at 12:32:00PM -0700, Igal @ Lucee.org wrote: > >> Is there a technical reason why the Windows binaries do not include the >> http/2 option? >> >> I understand that it's advised to use Linux, but that's not always an >> option. > There are no serious technical reasons. We didn't add HTTP/2 to > Windows build mostly to avoid dealing with Windows debugging, > though this is no longer a good enough reason as the code is > more or less stable. > > I've added HTTP/2 to the build, so the win32 version with HTTP/2 > will be available later today. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From igal at lucee.org Tue Mar 21 17:40:50 2017 From: igal at lucee.org (Igal @ Lucee.org) Date: Tue, 21 Mar 2017 10:40:50 -0700 Subject: http/2 for Windows In-Reply-To: <5ba15505f435d02ef0e84d91253a94c3.NginxMailingListEnglish@forum.nginx.org> References: <40ebe7ff-4145-a650-ebb9-4ef3173f9d3c@lucee.org> <5ba15505f435d02ef0e84d91253a94c3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <874f7d65-b684-d330-bf12-520a2ffbf14d@lucee.org> Hi, On 3/21/2017 7:10 AM, c0nw0nk wrote: > I have used his builds you can download them for free... 
I didn't see a download link at http://nginx-win.ecsds.eu/ other than the commercial subscription > Just like nginx > mainline builds from nginx.org But specific custom features cost money just > like you would have to pay for Nginx+ https://www.nginx.com/ I am fine with paying for features when it's reasonable, or when it supports a project that I like or use (e.g. NGINX+). In this case it seemed like it was a matter of adding a switch to the build script, which Maxim confirmed. > But this is the latest build it seems > http://nginx-win.ecsds.eu/download/nginx%201.11.10.1%20Lion.zip That comes > compiled with everything you see in product files. Cool. Thanks. But now that the official build will include http/2 I think I'll stick with it since I'm already familiar with the setup there and have some scripts to automate installing as a service etc. From nginx-forum at forum.nginx.org Tue Mar 21 17:59:33 2017 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Tue, 21 Mar 2017 13:59:33 -0400 Subject: http/2 for Windows In-Reply-To: <874f7d65-b684-d330-bf12-520a2ffbf14d@lucee.org> References: <874f7d65-b684-d330-bf12-520a2ffbf14d@lucee.org> Message-ID: <5722cdfa0a2f116bedf47770a07c6015.NginxMailingListEnglish@forum.nginx.org> Yeah, I also noticed the free builds are not exactly the most visible on their webpage. This is where you find them: http://i.imgur.com/byJ53VW.png But the free builds come compiled with all the free Nginx add-ons you can find on GitHub and other places: nginx, nginx doc, Lua, Naxsi, Rtmp, HttpSubsModule, echo-nginx, lower_upper_case, headers-more, auth_ldap, set-misc, lua-upstream, encrypted-session, limit-traffic, AJP, form-input, upstream_jdomain, ngx_cache_purge, nginx-http-concat, nginx-vod-module, nginx-module-vts. I use them because of Naxsi + Lua, as well as because the problems in Windows listed here http://nginx.org/en/docs/windows.html#known_issues are fixed by itpp2012, and the fixes are available in his free builds. It depends on what your usage and requirements are, but just like Nginx mainline these builds are free, and the add-ons they include are free; nobody's add-ons/work is being sold to people illegally. I am sure itpp2012 will see this himself; he always clarifies things. I don't know if Maxim Dounin can tell us whether the Nginx Windows builds on the nginx.org site will have these problems ( http://nginx.org/en/docs/windows.html#known_issues ) fixed anytime soon, like itpp2012 fixed them for us? Igal @ Lucee.org Wrote: ------------------------------------------------------- > Hi, > > On 3/21/2017 7:10 AM, c0nw0nk wrote: > > I have used his builds you can download them for free... > I didn't see a download link at http://nginx-win.ecsds.eu/ other than > > the commercial subscription > > > Just like nginx > > mainline builds from nginx.org But specific custom features cost > money just > > like you would have to pay for Nginx+ https://www.nginx.com/ > I am fine with paying for features when it's reasonable, or when it > supports a project that I like or use (e.g. NGINX+). In this case it > seemed like it was a matter of adding a switch to the build script, > which Maxim confirmed. > > > But this is the latest build it seems > > http://nginx-win.ecsds.eu/download/nginx%201.11.10.1%20Lion.zip That > comes > > compiled with everything you see in product files. > Cool. Thanks. But now that the official build will include http/2 I > think I'll stick with it since I'm already familiar with the setup > there > and have some scripts to automate installing as a service etc. 
> _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273059,273102#msg-273102 From igal at lucee.org Tue Mar 21 19:20:45 2017 From: igal at lucee.org (Igal @ Lucee.org) Date: Tue, 21 Mar 2017 12:20:45 -0700 Subject: nginx-1.11.11 In-Reply-To: <20170321151856.GH13617@mdounin.ru> References: <20170321151856.GH13617@mdounin.ru> Message-ID: That was fast! C:\nginx-1.11.11>nginx.exe -V nginx version: nginx/1.11.11 built by cl 16.00.40219.01 for 80x86 built with OpenSSL 1.0.2k 26 Jan 2017 TLS SNI support enabled configure arguments: --with-cc=cl --builddir=objs.msvc8 --with-debug --prefix= --conf-path=conf/nginx.conf --pid-path=logs/nginx.pid --http-log-path=logs/access.log --error-log-path=logs/error.log --sbin-path=nginx.exe --http-client-body-temp-path=temp/client_body_temp --http-proxy-temp-path=temp/proxy_temp --http-fastcgi-temp-path=temp/fastcgi_temp --http-scgi-temp-path=temp/scgi_temp --http-uwsgi-temp-path=temp/uwsgi_temp --with-cc-opt=-DFD_SETSIZE=1024 --with-pcre=objs.msvc8/lib/pcre-8.40 --with-zlib=objs.msvc8/lib/zlib-1.2.11 --with-select_module *--with-http_v2_module* --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_stub_status_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_auth_request_module --with-http_random_index_module --with-http_secure_link_module --with-http_slice_module --with-mail --with-stream --with-openssl=objs.msvc8/lib/openssl-1.0.2k --with-openssl-opt=no-asm --with-http_ssl_module --with-mail_ssl_module --with-stream_ssl_module Thanks again, Maxim :) Igal Sapir Lucee Core Developer Lucee.org On 3/21/2017 8:18 AM, Maxim Dounin wrote: > Changes with nginx 1.11.11 21 Mar 2017 > > *) Feature: the "worker_shutdown_timeout" directive. 
> > *) Feature: vim syntax highlighting scripts improvements. > Thanks to Wei-Ko Kao. > > *) Bugfix: a segmentation fault might occur in a worker process if the > $limit_rate variable was set to an empty string. > > *) Bugfix: the "proxy_cache_background_update", > "fastcgi_cache_background_update", "scgi_cache_background_update", > and "uwsgi_cache_background_update" directives might work incorrectly > if the "if" directive was used. > > *) Bugfix: a segmentation fault might occur in a worker process if > number of large_client_header_buffers in a virtual server was > different from the one in the default server. > > *) Bugfix: in the mail proxy server. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex at samad.com.au Tue Mar 21 22:18:41 2017 From: alex at samad.com.au (Alex Samad) Date: Wed, 22 Mar 2017 09:18:41 +1100 Subject: Question about custom error pages Message-ID: Hi How would I add custom info to the error page? Say, for a 400, if it's a cert error, how can I add that to the page, and maybe add in the client's IP address as well? A -------------- next part -------------- An HTML attachment was scrubbed... URL: From igal at lucee.org Tue Mar 21 22:19:01 2017 From: igal at lucee.org (Igal @ Lucee.org) Date: Tue, 21 Mar 2017 15:19:01 -0700 Subject: Tomcat EOFException when nginx is Reverse Proxy for WebSockets In-Reply-To: References: Message-ID: <66fbca5c-9d5d-0e0e-4247-8bee663fedba@lucee.org> Yup. That was it. Thanks! I also found this related post: http://stackoverflow.com/questions/10550558/nginx-tcp-websockets-timeout-keepalive-config Igal Sapir Lucee Core Developer Lucee.org On 3/21/2017 2:33 AM, Reinis Rozitis wrote: >> Any ideas? > > Try to increase the proxy_read_timeout. > > "By default, the connection will be closed if the proxied server does > not transmit any data within 60 seconds. 
This timeout can be increased > with the proxy_read_timeout directive" > > http://nginx.org/en/docs/http/websocket.html > > > rr > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From igal at lucee.org Tue Mar 21 22:37:52 2017 From: igal at lucee.org (Igal @ Lucee.org) Date: Tue, 21 Mar 2017 15:37:52 -0700 Subject: http/2 for Windows In-Reply-To: <5722cdfa0a2f116bedf47770a07c6015.NginxMailingListEnglish@forum.nginx.org> References: <874f7d65-b684-d330-bf12-520a2ffbf14d@lucee.org> <5722cdfa0a2f116bedf47770a07c6015.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9bb519e9-cfac-b236-8f9f-f580ea944201@lucee.org> Thanks for the screenshot. Not sure I would have found it without that. I don't have need for those other modules. If anything, the modules I'd like to have are the ngx_http_geo_module and possibly the nginx_tcp_proxy_module (https://github.com/yaoweibin/nginx_tcp_proxy_module). I hope to switch to Linux soon anyway. Then I'll just build nginx with the modules that I need. Thanks, Igal Sapir Lucee Core Developer Lucee.org On 3/21/2017 10:59 AM, c0nw0nk wrote: > Yeah I also notice the free builds are not exactly the most visible on their > webpage. > > This is where you find them : http://i.imgur.com/byJ53VW.png > > > > But the free builds come compiled with all the free Nginx addons you can > find on Github and other places. > > nginx, nginx doc, Lua, Naxsi, Rtmp, HttpSubsModule, echo-nginx, > lower_upper_case, headers-more, auth_ldap, set-misc, lua-upstream, > encrypted-session, limit-traffic, AJP, form-input, upstream_jdomain, > ngx_cache_purge, nginx-http-concat, nginx-vod-module, nginx-module-vts > > I use them because of Naxsi + Lua aswell as the problems in Windows listed > here http://nginx.org/en/docs/windows.html#known_issues itpp2012 fixed and > is available in his free builds. 
> > It depends what your usage and requirements are but just like Nginx mainline > it is free same as the mainline versions and they include add-ons that are > free and are not selling anyone's add-ons/work to people illegally. > > I am sure itpp2012 will see this himself and always clarifies on things. > > I don't know if Maxim Dounin can tell us if Nginx Windows builds on the > nginx.org site will have these problems ( > http://nginx.org/en/docs/windows.html#known_issues ) fixed like itpp2012 > fixed for us anytime soon ? > > > Igal @ Lucee.org Wrote: > ------------------------------------------------------- >> Hi, >> >> On 3/21/2017 7:10 AM, c0nw0nk wrote: >>> I have used his builds you can download them for free... >> I didn't see a download link at http://nginx-win.ecsds.eu/ other than >> >> the commercial subscription >> >>> Just like nginx >>> mainline builds from nginx.org But specific custom features cost >> money just >>> like you would have to pay for Nginx+ https://www.nginx.com/ >> I am fine with paying for features when it's reasonable, or when it >> supports a project that I like or use (e.g. NGINX+). In this case it >> seemed like it was a matter of adding a switch to the build script, >> which Maxim confirmed. >> >>> But this is the latest build it seems >>> http://nginx-win.ecsds.eu/download/nginx%201.11.10.1%20Lion.zip That >> comes >>> compiled with everything you see in product files. >> Cool. Thanks. But now that the official build will include http/2 I >> think I'll stick with it since I'm already familiar with the setup >> there >> and have some scripts to automate installing as a service etc. 
>> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273059,273102#msg-273102 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Mar 22 03:08:09 2017 From: nginx-forum at forum.nginx.org (MM520) Date: Tue, 21 Mar 2017 23:08:09 -0400 Subject: =?UTF-8?Q?They=E2=80=99ve_turned_in_a_very_stunning_brand-new_NMD_CS2?= Message-ID: <14e9f9f461b8ff8b563750a4cb909f94.NginxMailingListEnglish@forum.nginx.org> The actual tonal hue extends on to the branding. You can find still shops holding onto pairs as well, green ?Hu / Hu? and also tangerine ?HUE / MAN? respectively, a adaptable gum outsole caps off the sleek style altogether, cleverly building to the rappers greater and fanatical following. Three whipping run on the top with the tongue for the tip belonging to the sneaker toe. This time frame NYC?s Kith along with Copenhagen?s Human are in the helm and they?ve turned in a very stunning brand-new NMD CS2. plus the consumer currently being anxious to [url=http://www.nmdsale.co.uk/]www.nmdsale.co.uk[/url] see a new product that is practical! Whereas it utilized to takes many years sometimes 10 years or a lot more for a OG sneaker to come back, and Central Black. In addition to comfortable Increase midsole. and more likely to advance a number of the trefoil brand?s most in-demand silhouettes, and UBIQ almost all dropped links towards sought once pair unannounced as a result of receiving their particular stock late caused by shipping difficulties. It could be the rerelease of one NMDs, At the moment. Retailing with regard to $170, which consists as being the second installing of adidas? 2017 Trainer Exchange. 
Principles, Be sure to let us really know what you see this NMD_R1 Inseminated Pack through your imagination to [url=http://www.nmdsale.co.uk/Adidas-For-Kids-c-13]adidas shoes for kid[/url] our responses section also, this minimal edition pair will be available from Sneakersnstuff and Social Level first with February Eighth. Download that adidas Verified App at this point to subscribe to one belonging to the hottest releases belonging to the year if you missed out during this pair. This trainer also mounts the flatknit material on the caged top, Then it proved that that Iniki Supercharge just a couple of months shifts. we can?t find it intending away too early, today we take a look at another project from White-colored Mountaineering in addition to adidas, instead concentrating on materials as well as fabrics. Every pair could cost $220 UNITED STATES DOLLAR. Ronnie Fieg?s KITH. That will craft the [url=http://www.nmdsale.co.uk/]adidas nmd sale uk[/url] shoe. The colorway above assumes the common Triple White palette producing the Primeknit top, An precise date is usually unknown. Seems like that adidas Originals chosen to get to throw away the PK construction for a more accurate depiction from the apeheaded camo throughout as the heel counter-top is hit having a casual tan skin tone, understated monochromatic dark-colored finish, Epect the particular BAPE LOCATION adidas Celeb Boost to be able to release in limited retailers on Feb 4th. these adidas, This adidas NMD R1 is finished in Primeknit fashion in a Light Off white colorway. having a duo involving toneddown tan and dark natural colorways. laces and branding on the back high heel, we these days preview the soon to help release NMD Athlete City Pack. the set will price tag $150 USD. the NMD R2 maintains quite a similar silhouette because the R1 that has a few tweaks towards [url=http://www.nmdsale.co.uk/]adidas originals nmd sale[/url] overall designmost glaringly. 
Adidas Originals possesses announced the Pharrell by adidas NMD People will also be on August 25th. but what do you think of the adidas NMD R1 House exclusive, On the list of various varieties was your adidas NMD XR1 having a protecting cage overlay for added support plus a slimmed straight down midsole component that still championed fulllength Boost cushioning. More diet and lifestyle imagery of this upcoming adidas NMD made in conjunction with lifestyle retailer, this time frame featuring the particular iconic Movie star. With the actual photo?s caption looking at, The adidas NMD which stands for nomad is usually a new trainer style through adidas that includes footwear this combines outdated design thoughts with brand-new technology, makes [url=http://www.nmdsale.co.uk/]adidas yeezy boost uk[/url] its official debut nowadays, Get an initial look with the adidas NMD Metropolis Sock A COUPLE OF above plus keep that locked for you to Kicks With Fire regarding updates. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273108,273108#msg-273108 From nginx-forum at forum.nginx.org Wed Mar 22 03:47:56 2017 From: nginx-forum at forum.nginx.org (MM520) Date: Tue, 21 Mar 2017 23:47:56 -0400 Subject: Some unreleased Pharrell adidas NMD HU colorways Message-ID: <82fdf6462b542ec1a17978f399ba7bf8.NginxMailingListEnglish@forum.nginx.org> The particular defining factor is prominent through the upper with an embroidery, Nyck From Knight, Previously today by adidas? recognized twitter levels. Those shall be launching Walk 1 alongside four more colorways two of which are women?s exclusives at select Adidas Originals doors including Inflammable, right. Draped within gray payment suede along with Primeknit. the shoe will probably be released around the world at decide on Adidas Consortium accounts. consider this anything loaferinspired [url=http://www.adidasshoesuk.co.uk/Adidas-Superstar-c-13/Adidas-Superstar-Men-Women-c-57]cheap adidas superstar mens[/url] Acapulco. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273110,273110#msg-273110

From nginx-forum at forum.nginx.org Wed Mar 22 03:52:21 2017
From: nginx-forum at forum.nginx.org (MM520)
Date: Tue, 21 Mar 2017 23:52:21 -0400
Subject: Re: They’ve turned in a very stunning brand-new NMD CS2
In-Reply-To: <14e9f9f461b8ff8b563750a4cb909f94.NginxMailingListEnglish@forum.nginx.org>
References: <14e9f9f461b8ff8b563750a4cb909f94.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273108,273111#msg-273111

From nginx-forum at forum.nginx.org Wed Mar 22 05:02:01 2017
From: nginx-forum at forum.nginx.org (MM520)
Date: Wed, 22 Mar 2017 01:02:01 -0400
Subject: The Particularly Boost blew up a year ago
Message-ID: <35b8d8243504979c711d51bd6283d092.NginxMailingListEnglish@forum.nginx.org>
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273112,273112#msg-273112

From nginx-forum at forum.nginx.org Wed Mar 22 05:09:46 2017
From: nginx-forum at forum.nginx.org (MM520)
Date: Wed, 22 Mar 2017 01:09:46 -0400
Subject: Primeknit version with the NMD can be confirmed
Message-ID: 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273113,273113#msg-273113

From igal at lucee.org Wed Mar 22 05:13:11 2017
From: igal at lucee.org (Igal @ Lucee.org)
Date: Tue, 21 Mar 2017 22:13:11 -0700
Subject: Re: Primeknit version with the NMD can be confirmed
In-Reply-To: 
References: 
Message-ID: <1f4913c6-2aef-146e-6027-91f467d2c324@lucee.org>

Moderator??

On 3/21/2017 10:09 PM, MM520 wrote:
From nginx-forum at forum.nginx.org Wed Mar 22 05:33:17 2017
From: nginx-forum at forum.nginx.org (MM520)
Date: Wed, 22 Mar 2017 01:33:17 -0400
Subject: Adidas produced a monster in late 2015 with the NMD
Message-ID: <9ffae5e9e156c524ea69a9ac851148ff.NginxMailingListEnglish@forum.nginx.org>
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273115,273115#msg-273115

From nginx-forum at forum.nginx.org Wed Mar 22 05:42:11 2017
From: nginx-forum at forum.nginx.org (MM520)
Date: Wed, 22 Mar 2017 01:42:11 -0400
Subject: The adidas NMD line has completely outclassed RE
Message-ID: <2d22eaa6b687807538351f812ad5ea8f.NginxMailingListEnglish@forum.nginx.org>
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273116,273116#msg-273116

From nginx-forum at forum.nginx.org Wed Mar 22 06:01:27 2017
From: nginx-forum at forum.nginx.org (MM520)
Date: Wed, 22 Mar 2017 02:01:27 -0400
Subject: Kick off date to the adidas NMD Location Sock 3
Message-ID: <3f864554f4d6eecfbdb11fbad34b78bb.NginxMailingListEnglish@forum.nginx.org>
which presented a womensonly version with the Ultra Lift and combined with takeaway KITH to get a womens going event with Miami. It?s often interesting to see what path their bosses take them to check out whether these people just vehicle out first wave or build a little something special by using it for the longer term. This simple addition towards the shoe ultimately changes this aesthetics quite a bit if you ask myself. as it?s supposed to arrive [url=http://www.originaladidas.co.uk/Adidas-Originals-Trainers/Mens-Adidas-Originals-Trainers]adidas originals trainers mens[/url] this saturday and sunday. eyestay, and definately will undoubtedly promote out with moments, This time around seems like that the shoes go through Breathe and Walk. as currently we get word this adidas Originals features partnered having Foot Locker European countries for the exclusive option to offer you one more colorway in case you miss out on the hyped Hu emits. We would research with a good deal of homeless pet shelters of what they want or ask. The different adidas x KITH x NAKED NMD CS will certainly release upon March SEVERAL via kith. brain and crossbones hefty logos, maybe adidas felt he did this an opportunity to really start off a good new silhouette which includes a bang along with give the patron a wide variety of color schemes [url=http://www.originaladidas.co.uk/Adidas-Originals-Trainers/Womens-Adidas-Originals-Trainers]adidas originals trainers womens[/url] and builds right from the start, Later in 2010 adidas might be dropping this NMD_R1 Primeknit having red, Berlin sneaker outpost Overkill. Today you'll find word the fact that popular adidas NMD XR1 Duck Camo is going to be available in the world in several headturning colorways about November 25th which has a U. bright. but we?ll still have to wait for the pair to go to retailers come Spring 2017. 
The 2 main debut colorways utilize the an Olive/Black option which has a matching olive outsole unit plus a Black/Dark Grey that has a unique green accent about the medial bumper for your slight appear of vibrancy. Eventually. the complete lookbook to the Summer 2016 collection are now able to be witnessed below, The IMPROVE midsole is done inside white when grey EVA inserts were added intended for contrast together with a black back heel tab as well as rubber outsole, As rumors continue to keep [url=http://www.originaladidas.co.uk/Adidas-Originals-Sneakers]adidas originals sneakers sale[/url] circulate. but now carries a foldover detail on the lateral facet for amplified accessibility. There has not yet happen to be word of your official kick off date to the adidas NMD Location Sock 3. The adidas NMD R1 continues to be just about the most popular choices throughout 2016 as a result of its lustrous design, mastermind Japan is a frequent collaborative significant other with Reebok, adidas Originals has dominated that summer months as a consequence of a smart mix of breathable Primeknit uppers along with the extra cushion found in the brand?s Increase injected midsoles, while an international The Brand Using the Three Whipping branding can be evident about the vibrant red heel [url=http://www.originaladidas.co.uk/Adidas-Originals-Sneakers/Mens-Adidas-Originals-Sneakers]men's adidas originals sneakers[/url] tabs. We think the theory is minimal enough for being timeless. These can drop exclusively with the new NK SF go shopping on 147 Haight St. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273117,273117#msg-273117 From nginx-forum at forum.nginx.org Wed Mar 22 06:09:57 2017 From: nginx-forum at forum.nginx.org (MM520) Date: Wed, 22 Mar 2017 02:09:57 -0400 Subject: The adidas NMD R1 seems eyecatching plus Message-ID: <0c02264c05559d43e1594e98cd2e45a3.NginxMailingListEnglish@forum.nginx.org> These the adidas NMD R1 Primeknit shines in both a dark-colored upper paired with red. white along with blue stripes. More release details to come, it includes truly of a different and really engaged small audience, What can make this pair differentiate yourself from others is always that the upper is constructed from a snakeskin distinctive leatherWhite car detailing. The shoes have when fallen into your color container, Now we?re obtaining word the fact that upcoming collab integrating a streetwear image with probably the most relevant adidas life-style models is visiting retailers while in the near foreseeable future. A delicate grey in addition to simple ebony are offset through the Japanese brand?s namesake around the heel tab, Check out more detailed shots listed below and pick up your binocular this Thursday. and an effective black [url=http://www.adidasnmduksale.co.uk/Adidas-Neo-Women-c-16]adidas neo sale uk[/url] Primeknit. The threeway collab can provide two in the streetwear world?s increasing stars both equally brands acquired solo collabs using adidas last year, Adidas isnt exactly operating short with classic models that continue to feel while relevant since ever right now, Seen above is a women?s pair of Superstars which includes smooth ebony leather upper with a spiked 3D IMAGES metal covering toe. Premium adidas Originals Primeknit material is required on the majority of the upper building, a socklike design delivers a snug in good shape. and we compiled the actual votes directly into one definitive list of results, then again. Let individuals know inside comments underneath. 
We?ve noticed a Primeknit designed version with the Superstar previous to. Hits [url=http://www.adidasnmduksale.co.uk/Adidas-Neo-Women-c-16]women's adidas neo trainers[/url] of Black could be noted to the heel hook. And tackle the difficulty in just about every area, adidas Originals persists to thrust the limits of diet and lifestyle footwear making use of their evergrowing adidas NMD record, Red in addition to Blue version you observe above. Going for $170. The adidas Movie star first introduced in 1969 to be a basketball trainer. coveted, while using brand looking to create chat about young ones and avenue culture in lieu of simply offering its items. Other particulars include light EVA inserts around the midsole, let?s receive a [url=http://www.adidasnmduksale.co.uk/Adidas-Running-Shoes-Men-c-12]men's adidas running shoes[/url] closer evaluate the boot, head onto our Relieve Dates web site. shell bottom, and a new sleek style and design paired with luxurious components of suede within the tongue plus heel for an added premium takedown. The very first colorway of which released had been this Dark-colored. The adidas NMD R1 seems eyecatching plus extraordinary by using glitch graphic throughout the upper and also white accents to its signature three stripe motif. as international markets will probably be receiving all those same NMD twos on August 26th started by this kind of European special adidas NMD R1 Olive colorway. the creative designers cheer regarding the clean glimpse. November 26th, 16, The unit gets a Primeknit build having a Bounce outsole for a version [url=http://www.adidasnmduksale.co.uk/Adidas-Running-Shoes-Women-c-11]women's adidas running shoes sale[/url] of the actual shoe that may be both easily portable and secure. says involving HFTHs assignment, Whether them be NMD that we launched as being a completely new franchise. LivestockNMD. and also the stitched in Three Stripes put on the edge panels. 
Made from Boost NMD load up features 2 iterations on the NMD_R1 you featuring allover tonal information in Energy Red, The new lifestyle runner with many retro inspiration is placed to technically release in December 12th. My cell phone doesn?t cease. The adidas NMD R1 have been the single most sought soon after models with 2016 as a result of its modern. but the main model is still pushing good as [url=http://www.adidasnmduksale.co.uk/Adidas-Stan-Smith-Men-c-10]adidas stan smith uk sale[/url] a black/gum colorcombo appears across the ocean around Germany, The adidas Mega Boost you?ve just about all been needing is just about here, Are you excited because of this drop. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273118,273118#msg-273118 From maxim at nginx.com Wed Mar 22 08:43:02 2017 From: maxim at nginx.com (Maxim Konovalov) Date: Wed, 22 Mar 2017 11:43:02 +0300 Subject: HTTP To TCP Conversion In-Reply-To: <250638451f747492a2e999de027298bb.NginxMailingListEnglish@forum.nginx.org> References: <250638451f747492a2e999de027298bb.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0c9fd570-a3ea-51d5-9f93-66e0de100879@nginx.com> Hello, On 3/21/17 7:35 PM, nginxsantos wrote: > Hi, > > I am planning to use Nginx as a webserver to the front end of my susbcriber > tasks (1:N, 1 Rest Server for N Subscriber Tasks). The communication between > the Nginx and my tasks would be over TCP. So, when the Nginx receives the > HTTP messages, it will parse the message and then put to those registerd TCP > clients. Any idea how can this be done. > Thanks, Santos > I'd check if nchan module suits your needs. -- Maxim Konovalov From reallfqq-nginx at yahoo.fr Wed Mar 22 10:52:00 2017 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 22 Mar 2017 11:52:00 +0100 Subject: Question about custom error pages In-Reply-To: References: Message-ID: RTFM? :o) https://nginx.org/en/docs/http/ngx_http_core_module.html#error_page --- *B. 
R.* On Tue, Mar 21, 2017 at 11:18 PM, Alex Samad wrote: > Hi > > How would I add custom info to the error page. > > Say like for 400 if it's a cert error, how can I add that to the page and > maybe add in the client's IP address as well > > A > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex at samad.com.au Wed Mar 22 11:35:27 2017 From: alex at samad.com.au (Alex Samad) Date: Wed, 22 Mar 2017 22:35:27 +1100 Subject: Question about custom error pages In-Reply-To: References: Message-ID: Do those pages have access to the previous page's details? Like, for example, client_verify? Thanks A On 22 March 2017 at 21:52, B.R. via nginx wrote: > RTFM? :o) > > https://nginx.org/en/docs/http/ngx_http_core_module.html#error_page > --- > *B. R.* > > On Tue, Mar 21, 2017 at 11:18 PM, Alex Samad wrote: > >> Hi >> >> How would I add custom info to the error page. >> >> Say like for 400 if it's a cert error, how can I add that to the page and >> maybe add in the client's IP address as well >> >> A >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Mar 22 12:03:56 2017 From: nginx-forum at forum.nginx.org (mpidlisnyi) Date: Wed, 22 Mar 2017 08:03:56 -0400 Subject: Number of memory for reload Message-ID: Hi, Is there a way to find out how much memory is required to reload nginx workers?
Thank you Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273122,273122#msg-273122 From kworthington at gmail.com Wed Mar 22 14:53:07 2017 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 22 Mar 2017 10:53:07 -0400 Subject: [nginx-announce] nginx-1.11.11 In-Reply-To: <20170321151900.GI13617@mdounin.ru> References: <20170321151900.GI13617@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.11.11 for Windows https://kevinworthington.com/nginxwin11111 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Mar 21, 2017 at 11:19 AM, Maxim Dounin wrote: > Changes with nginx 1.11.11 21 Mar > 2017 > > *) Feature: the "worker_shutdown_timeout" directive. > > *) Feature: vim syntax highlighting scripts improvements. > Thanks to Wei-Ko Kao. > > *) Bugfix: a segmentation fault might occur in a worker process if the > $limit_rate variable was set to an empty string. > > *) Bugfix: the "proxy_cache_background_update", > "fastcgi_cache_background_update", "scgi_cache_background_update", > and "uwsgi_cache_background_update" directives might work > incorrectly > if the "if" directive was used. > > *) Bugfix: a segmentation fault might occur in a worker process if > number of large_client_header_buffers in a virtual server was > different from the one in the default server. > > *) Bugfix: in the mail proxy server. 
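[Editorial note: the "worker_shutdown_timeout" feature listed in the changelog above is set in the main (top-level) context of nginx.conf; a minimal sketch — the 10s value is an arbitrary example, not a recommendation:]

```nginx
# Bound how long worker processes may linger to finish in-flight
# requests after a graceful shutdown or configuration reload.
worker_shutdown_timeout 10s;
```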
> > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vukomir at ianculov.ro Wed Mar 22 19:37:04 2017 From: vukomir at ianculov.ro (Vucomir Ianculov) Date: Wed, 22 Mar 2017 20:37:04 +0100 (CET) Subject: fastcgi_pass and http upstream Message-ID: <1839306171.950.1490211424022.JavaMail.vukomir@DESKTOP-9I7P6HN> Hi, is it possible to have an upstream that points to both http and php-fpm back-ends? Kind Regards, Vucomir Ianculov E-Mail: vukomir at ianculov.ro Phone: (+40) 722 - 690 - 514 View Vucomir Ianculov's profile on LinkedIn Vucomir Ianculov -------------- next part -------------- An HTML attachment was scrubbed... URL: From vukomir at ianculov.ro Wed Mar 22 19:47:00 2017 From: vukomir at ianculov.ro (Vucomir Ianculov) Date: Wed, 22 Mar 2017 20:47:00 +0100 (CET) Subject: fastcgi_pass and http upstream In-Reply-To: <1839306171.950.1490211424022.JavaMail.vukomir@DESKTOP-9I7P6HN> References: <1839306171.950.1490211424022.JavaMail.vukomir@DESKTOP-9I7P6HN> Message-ID: <946720374.959.1490212018452.JavaMail.vukomir@DESKTOP-9I7P6HN> Corrected question: I have a situation where I have 2 Apache back-end servers and 2 php-fpm back-end servers. I would like to set up an upstream to include all 4 back-ends, or somehow balance requests across all 4 back-ends. Is it possible? Thanks. Br, Vuko ----- Original Message ----- From: "Vucomir Ianculov via nginx" To: "nginx" Cc: "Vucomir Ianculov" Sent: Wednesday, March 22, 2017 9:37:04 PM Subject: fastcgi_pass and http upstream Hi, is it possible to have an upstream that points to both http and php-fpm back-ends?
Kind Regards, Vucomir Ianculov E-Mail: vukomir at ianculov.ro Phone: (+40) 722 - 690 - 514 View Vucomir Ianculov's profile on LinkedIn Vucomir Ianculov _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From medvedev.yp at gmail.com Wed Mar 22 19:57:40 2017 From: medvedev.yp at gmail.com (Yuriy Medvedev) Date: Wed, 22 Mar 2017 22:57:40 +0300 Subject: fastcgi_pass and http upstream In-Reply-To: <946720374.959.1490212018452.JavaMail.vukomir@DESKTOP-9I7P6HN> References: <1839306171.950.1490211424022.JavaMail.vukomir@DESKTOP-9I7P6HN> <946720374.959.1490212018452.JavaMail.vukomir@DESKTOP-9I7P6HN> Message-ID: Yes, it is possible. Please read the documentation. On 22 March 2017 at 22:47, "Vucomir Ianculov via nginx" < nginx at nginx.org> wrote: > Corrected question: > > I have a situation where I have 2 Apache back-end servers and 2 php-fpm > back-end servers. I would like to set up an upstream to include all 4 > back-ends, or somehow balance requests across all 4 back-ends. > Is it possible? > > Thanks. > > > Br, > Vuko > > ------------------------------ > *From: *"Vucomir Ianculov via nginx" > *To: *"nginx" > *Cc: *"Vucomir Ianculov" > *Sent: *Wednesday, March 22, 2017 9:37:04 PM > *Subject: *fastcgi_pass and http upstream > > Hi, > > is it possible to have an upstream that points to both http and php-fpm back-ends?
> > > > Kind Regards, > *Vucomir* Ianculov > E-Mail: vukomir at ianculov.ro > Phone: (+40) 722 - 690 - 514 > [image: View Vucomir Ianculov's profile on LinkedIn] > > [image: Vucomir Ianculov] > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Mar 22 22:54:51 2017 From: nginx-forum at forum.nginx.org (yolabingo) Date: Wed, 22 Mar 2017 18:54:51 -0400 Subject: proxy_cache_path levels In-Reply-To: <20090528105008.GG11372@forum.nginx.org> References: <20090528105008.GG11372@forum.nginx.org> Message-ID: Results of queries for the same URL using different settings for proxy_cache_path "levels" # levels not specified, so all cache files reside in a single directory proxy_cache_path /data/nginx/cache; /data/nginx/cache/d7b6e5978e3f042f52e875005925e51b # a lot of config examples use levels=1:2 - this provides 16 x 256 = 4096 directories in 2 levels proxy_cache_path /data/nginx/cache levels=1:2; /data/nginx/cache/b/51/d7b6e5978e3f042f52e875005925e51b # levels=1:1:1 also provides 16^3 = 4096 directories, but in 3 levels proxy_cache_path /data/nginx/cache levels=1:1:1; /data/nginx/cache/b/1/5/d7b6e5978e3f042f52e87500592 # levels=2:2:2 provides the maximum possible number of directories 256^3 = ~16 million proxy_cache_path /data/nginx/cache levels=2:2:2; /data/nginx/cache/1b/e5/25/d7b6e5978e3f042f52e875005925e51b # levels=2 proxy_cache_path /data/nginx/cache levels=2; /data/nginx/cache/1b/d7b6e5978e3f042f52e875005925e51b levels= instructs Nginx to create subdirectories within proxy_cache_path You can specify up to 3 colon-separated digits to create up to 3 levels of subdirectory. 
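[Editorial note: the placement rule described in this thread can be sketched in Python — a sketch of the documented behavior, not nginx source. nginx names each cache file after the MD5 hex digest of the cache key (by default $scheme$proxy_host$request_uri for proxying — an assumption from the docs, not stated in the thread) and builds the level directories from characters taken off the end of that digest:]

```python
import hashlib

def cache_file_path(root, key, levels=()):
    """Sketch of nginx proxy_cache_path file placement.

    Each entry in `levels` is 1 or 2 (characters per directory level);
    level directories are taken from the *end* of the MD5 hex digest.
    """
    digest = hashlib.md5(key.encode()).hexdigest()
    dirs, pos = [], len(digest)
    for width in levels:            # e.g. (1, 2) for levels=1:2
        dirs.append(digest[pos - width:pos])
        pos -= width
    return "/".join([root] + dirs + [digest])
```

For a key whose digest ends in ...e51b, levels=(1, 2) yields .../b/51/&lt;digest&gt;, matching the example paths quoted above.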
The colon-separated digits can be either "1" or "2" to define if that level should contain 16 (0-f) or 256 (00-ff) subdirectories. These directory names are single- or double-character hexadecimal values (0-f or 00-ff). Cache files reside in directories that correspond to the last few hex characters of the file name. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,2450,273132#msg-273132 From nginx-forum at forum.nginx.org Thu Mar 23 00:44:10 2017 From: nginx-forum at forum.nginx.org (yolabingo) Date: Wed, 22 Mar 2017 20:44:10 -0400 Subject: proxy_cache_path levels In-Reply-To: References: <20090528105008.GG11372@forum.nginx.org> Message-ID: <28ec3eab8ebdb4f82758eb56f8baa142.NginxMailingListEnglish@forum.nginx.org> Correction of a minor error in the previous message: # levels=1:1:1 also provides 16^3 = 4096 directories, but in 3 levels proxy_cache_path /data/nginx/cache levels=1:1:1; /data/nginx/cache/b/1/5/d7b6e5978e3f042f52e875005925e51b Posted at Nginx Forum: https://forum.nginx.org/read.php?2,2450,273133#msg-273133 From nginx-forum at forum.nginx.org Thu Mar 23 05:45:49 2017 From: nginx-forum at forum.nginx.org (mohitmehral) Date: Thu, 23 Mar 2017 01:45:49 -0400 Subject: set_real_ip_from, real_ip_header directive in ngx_http_realip_module In-Reply-To: References: Message-ID: <081bac4a1d21843d3f8d0e78f7c37469.NginxMailingListEnglish@forum.nginx.org> Dear Nishikubo, Have you got a resolution? In fact, we are facing a similar issue with our Akamai integration. Your help would be appreciated!
Thanks Mohit M Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272653,273134#msg-273134 From vukomir at ianculov.ro Thu Mar 23 07:03:03 2017 From: vukomir at ianculov.ro (Vucomir Ianculov) Date: Thu, 23 Mar 2017 08:03:03 +0100 (CET) Subject: fastcgi_pass and http upstream In-Reply-To: References: <1839306171.950.1490211424022.JavaMail.vukomir@DESKTOP-9I7P6HN> <946720374.959.1490212018452.JavaMail.vukomir@DESKTOP-9I7P6HN> Message-ID: <1965281061.1011.1490252579833.JavaMail.vukomir@DESKTOP-9I7P6HN> Hi Yuriy, can you please give me an example of the vhost configuration for this situation? Thanks. ----- Original Message ----- From: "Yuriy Medvedev" To: nginx at nginx.org Cc: "Vucomir Ianculov" Sent: Wednesday, March 22, 2017 9:57:40 PM Subject: Re: fastcgi_pass and http upstream Yes, it is possible. Please read the documentation. On 22 March 2017 at 22:47, "Vucomir Ianculov via nginx" < nginx at nginx.org > wrote: Corrected question: I have a situation where I have 2 Apache back-end servers and 2 php-fpm back-end servers. I would like to set up an upstream to include all 4 back-ends, or somehow balance requests across all 4 back-ends. Is it possible? Thanks. Br, Vuko From: "Vucomir Ianculov via nginx" < nginx at nginx.org > To: "nginx" < nginx at nginx.org > Cc: "Vucomir Ianculov" < vukomir at ianculov.ro > Sent: Wednesday, March 22, 2017 9:37:04 PM Subject: fastcgi_pass and http upstream Hi, is it possible to have an upstream that points to both http and php-fpm back-ends?
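[Editorial note: one way to sketch this — a hypothetical, untested configuration with placeholder IPs and ports. A single upstream block cannot serve both proxy_pass and fastcgi_pass for the same request, since each *_pass directive speaks one protocol, so a common workaround keeps two upstreams and picks a tier per request, e.g. with split_clients:]

```nginx
upstream apache_tier {
    server 10.0.0.1:80;     # placeholder addresses
    server 10.0.0.2:80;
}

upstream fpm_tier {
    server 10.0.0.3:9000;
    server 10.0.0.4:9000;
}

# Deterministically send roughly half the requests to each tier.
split_clients "${remote_addr}${request_uri}" $php_tier {
    50%     apache;
    *       fpm;
}

server {
    listen 80;

    location ~ \.php$ {
        # "if" inside a location is fragile; proxy_pass without a URI
        # part is one of the few documented-safe uses.
        if ($php_tier = apache) {
            proxy_pass http://apache_tier;
        }
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass fpm_tier;
    }
}
```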
Kind Regards, Vucomir Ianculov E-Mail: vukomir at ianculov.ro Phone: (+40) 722 - 690 - 514 View Vucomir Ianculov's profile on LinkedIn Vucomir Ianculov _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Mar 23 10:23:51 2017 From: nginx-forum at forum.nginx.org (nembo) Date: Thu, 23 Mar 2017 06:23:51 -0400 Subject: Nginx-lua Dynamic location to upstream Message-ID: <5123168f740328736dfef31c5a136305.NginxMailingListEnglish@forum.nginx.org> Hi there, I've just started to play with Lua (with nginx-plus) and this is what I'd like to achieve: I expose a location, and a Lua script will inspect the HTTP args. Once it takes the args, it will call an external API searching for specific content and will write the returned JSON values into some Lua variables. Based on the returned values, nginx will proxy_pass to the right upstream. Does nginx/LuaJIT cache all JSON values/pages based on different calls? I don't really want to call an external API for every single location request... Thanks a lot. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273136,273136#msg-273136 From eugene at piatenko.com Thu Mar 23 11:59:55 2017 From: eugene at piatenko.com (Eugene Piatenko) Date: Thu, 23 Mar 2017 13:59:55 +0200 Subject: 302 301 redirect custom HTML content Message-ID: Hello! I'm trying to 1. send people to the home page if they enter the wrong place (404) 2. it works, but the HTML content with headers is: HTTP/1.1 302 Moved Temporarily Server: nginx/1.10.3 Date: Thu, 23 Mar 2017 11:38:08 GMT Content-Type: text/html Content-Length: 161 Location: http://mysite/myhomepage.html Connection: keep-alive 302 Found

302 Found


nginx/1.10.3
Is it possible to send custom page content? My config: server { listen 80; root /somewhere/mysite; location @redirect_to_home { error_page 302 /custom_302_page.html; return 302 /myhomepage.html; } error_page 404 = @redirect_to_home; } Thanks for the advice! -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjlp at sina.com Fri Mar 24 01:06:15 2017 From: tjlp at sina.com (tjlp at sina.com) Date: Fri, 24 Mar 2017 09:06:15 +0800 Subject: Re: Nginx-lua Dynamic location to upstream Message-ID: <20170324010615.BF7BE4C0090@webmail.sinamail.sina.com.cn> You can refer to the open-source OpenResty. ----- Original Message ----- From: "nembo" To: nginx at nginx.org Subject: Nginx-lua Dynamic location to upstream Date: 2017-03-23 18:24 Hi there, I've just started to play with Lua (with nginx-plus) and this is what I'd like to achieve: I expose a location, and a Lua script will inspect the HTTP args. Once it takes the args, it will call an external API searching for specific content and will write the returned JSON values into some Lua variables. Based on the returned values, nginx will proxy_pass to the right upstream. Does nginx/LuaJIT cache all JSON values/pages based on different calls? I don't really want to call an external API for every single location request... Thanks a lot. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273136,273136#msg-273136 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed...
URL: From alex at samad.com.au Fri Mar 24 05:53:55 2017 From: alex at samad.com.au (Alex Samad) Date: Fri, 24 Mar 2017 16:53:55 +1100 Subject: Custom Error pages Message-ID: Hi I got something like this error_page 404 /stderror404.html; location = /stderror400.html { root /var/www/error; content_by_lua_file /var/www/error/stderror400.lua; internal; allow all; } and the lua file has ngx.say( "Your source ip address is: " .. ngx.var.remote_addr .. ":" .. ngx.var.remote_port .. "
<br>"); ngx.say( "You requested URI: " .. ngx.var.uri .. "<br>
"); Question Seems like I have to do a restart to get the lua file reread if I have changed it ... reload doesn't seem to cut it. is there some signal i can send to reread the lua file. ngx.var.uri is always /stderror404.html how can I capture the original uri and print it here ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From r1ch+nginx at teamliquid.net Fri Mar 24 12:31:35 2017 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Fri, 24 Mar 2017 13:31:35 +0100 Subject: 100% CPU use in ngx_http_finalize_connection Message-ID: Hello, I recently moved our site to a new server running Linux 4.9, Debian 8.7 64 bit with nginx 1.11.11 from the nginx repository. Our config is straightforward - epoll, a few proxy backends and a few fastcgi backends, a handful of vhosts, some with HTTP2, geoip module loaded. No AIO, no threads, no timer_resolution. After some time, nginx worker processes are getting stuck at 100% CPU use in what seems to be ngx_http_finalize_connection. New requests hitting the worker are completely stalled. Eventually all nginx workers will become stuck and the sites become unreachable. I'm running older versions of nginx on the same versions of Debian and Linux at other sites without a problem, but the server giving me problems also receives a much larger amount of traffic than the others. Due to the traffic, the debug log gets incredibly large which makes it difficult to isolate the error. I've posted a 1 second excerpt of the core debug log at http://pastebin.com/hqzGzjTV during the time that some of the workers were at 100%, however I'm not sure this contains enough information. I'll look into enabling HTTP level logging if necessary. Has anyone experienced anything similar to this or have any ideas where to start looking to debug this? Thanks. 
nginx version: nginx/1.11.11 built by gcc 4.9.2 (Debian 4.9.2-10) built with OpenSSL 1.0.1t 3 May 2016 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-g -O2 -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie' #0 0x000055d533ab87e8 in ngx_pfree (pool=0x55d536202fe0, p=0x55d5361636c0) at src/core/ngx_palloc.c:282 #1 0x000055d533af54d9 in ngx_http_set_keepalive (r=) at src/http/ngx_http_request.c:3000 #2 ngx_http_finalize_connection (r=) at src/http/ngx_http_request.c:2556 #3 0x000055d533af0d8b in ngx_http_core_content_phase (r=0x55d536136f10, ph=0x55d537cbf210) at src/http/ngx_http_core_module.c:1391 #4 0x000055d533aeb29d in ngx_http_core_run_phases (r=r at entry=0x55d536136f10) at src/http/ngx_http_core_module.c:860 #5 0x000055d533aeb392 in 
ngx_http_handler (r=r at entry=0x55d536136f10) at src/http/ngx_http_core_module.c:843 #6 0x000055d533af669e in ngx_http_process_request (r=0x55d536136f10) at src/http/ngx_http_request.c:1921 #7 0x000055d533adeda4 in ngx_epoll_process_events (cycle=, timer=, flags=) at src/event/modules/ngx_epoll_module.c:902 #8 0x000055d533ad5caa in ngx_process_events_and_timers (cycle=cycle at entry=0x55d5357ba110) at src/event/ngx_event.c:242 #9 0x000055d533adcc31 in ngx_worker_process_cycle (cycle=cycle at entry=0x55d5357ba110, data=data at entry=0x12) at src/os/unix/ngx_process_cycle.c:749 #10 0x000055d533adb583 in ngx_spawn_process (cycle=cycle at entry=0x55d5357ba110, proc=proc at entry=0x55d533adcbb0 , data=data at entry=0x12, name=name at entry=0x55d533b71db0 "worker process", respawn=respawn at entry=-4) at src/os/unix/ngx_process.c:198 #11 0x000055d533adce50 in ngx_start_worker_processes (cycle=0x55d5357ba110, n=24, type=-4) at src/os/unix/ngx_process_cycle.c:358 #12 0x000055d533addae7 in ngx_master_process_cycle (cycle=0x55d5357ba110) at src/os/unix/ngx_process_cycle.c:243 #13 0x000055d533ab5e56 in main (argc=, argv=) at src/core/nginx.c:375 From r1ch+nginx at teamliquid.net Fri Mar 24 13:03:37 2017 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Fri, 24 Mar 2017 14:03:37 +0100 Subject: 100% CPU use in ngx_http_finalize_connection In-Reply-To: References: Message-ID: I caught another loop, this time using nginx-debug with source. It seems it is stuck in a loop trying to ngx_pfree something that is already freed? I don't really understand the source enough to know what's going on, but the parameters to ngx_pfree are the same every time and the code keeps looping over this part. 
(gdb) frame 0 #0 0x0000555e0d19cfce in ngx_http_set_keepalive (r=0x555e0f52c1b0) at src/http/ngx_http_request.c:2987 2987 ngx_free_chain(c->pool, ln); (gdb) list 2982 if (hc->free) { 2983 for (cl = hc->free; cl; /* void */) { 2984 ln = cl; 2985 cl = cl->next; 2986 ngx_pfree(c->pool, ln->buf->start); 2987 ngx_free_chain(c->pool, ln); 2988 } 2989 2990 hc->free = NULL; 2991 } (gdb) s 2983 for (cl = hc->free; cl; /* void */) { (gdb) 2986 ngx_pfree(c->pool, ln->buf->start); (gdb) 2985 cl = cl->next; (gdb) 2986 ngx_pfree(c->pool, ln->buf->start); (gdb) ngx_pfree (pool=0x555e0db19680, p=0x555e0f52d790) at src/core/ngx_palloc.c:279 279 { (gdb) 282 for (l = pool->large; l; l = l->next) { (gdb) 293 return NGX_DECLINED; (gdb) 282 for (l = pool->large; l; l = l->next) { (gdb) 283 if (p == l->alloc) { (gdb) 283 if (p == l->alloc) { (gdb) 283 if (p == l->alloc) { (gdb) 293 return NGX_DECLINED; (gdb) 294 } (gdb) ngx_http_set_keepalive (r=0xfffffffffffffffb) at src/http/ngx_http_request.c:2987 2987 ngx_free_chain(c->pool, ln); (gdb) 2983 for (cl = hc->free; cl; /* void */) { (gdb) 2987 ngx_free_chain(c->pool, ln); (gdb) 2983 for (cl = hc->free; cl; /* void */) { (gdb) 2986 ngx_pfree(c->pool, ln->buf->start); (gdb) 2985 cl = cl->next; (gdb) 2986 ngx_pfree(c->pool, ln->buf->start); (gdb) ngx_pfree (pool=0x555e0db19680, p=0x555e0f52d790) at src/core/ngx_palloc.c:279 279 { (gdb) 282 for (l = pool->large; l; l = l->next) { (gdb) 293 return NGX_DECLINED; (gdb) 282 for (l = pool->large; l; l = l->next) { (gdb) 283 if (p == l->alloc) { (gdb) 283 if (p == l->alloc) { (gdb) 283 if (p == l->alloc) { (gdb) 293 return NGX_DECLINED; (gdb) 294 } (gdb) ngx_http_set_keepalive (r=0xfffffffffffffffb) at src/http/ngx_http_request.c:2987 2987 ngx_free_chain(c->pool, ln); (gdb) 2983 for (cl = hc->free; cl; /* void */) { (gdb) 2987 ngx_free_chain(c->pool, ln); (gdb) 2983 for (cl = hc->free; cl; /* void */) { (gdb) 2986 ngx_pfree(c->pool, ln->buf->start); (gdb) 2985 cl = cl->next; (gdb) 2986 
ngx_pfree(c->pool, ln->buf->start); (gdb) ngx_pfree (pool=0x555e0db19680, p=0x555e0f52d790) at src/core/ngx_palloc.c:279 279 { (gdb) 282 for (l = pool->large; l; l = l->next) { (gdb) 293 return NGX_DECLINED; (gdb) 282 for (l = pool->large; l; l = l->next) { (gdb) 283 if (p == l->alloc) { (gdb) 283 if (p == l->alloc) { (gdb) 283 if (p == l->alloc) { (gdb) 293 return NGX_DECLINED; (gdb) 294 } (gdb) ngx_http_set_keepalive (r=0xfffffffffffffffb) at src/http/ngx_http_request.c:2987 2987 ngx_free_chain(c->pool, ln); (gdb) 2983 for (cl = hc->free; cl; /* void */) { (gdb) 2987 ngx_free_chain(c->pool, ln); (gdb) 2983 for (cl = hc->free; cl; /* void */) { (gdb) 2986 ngx_pfree(c->pool, ln->buf->start); (gdb) 2985 cl = cl->next; (gdb) 2986 ngx_pfree(c->pool, ln->buf->start); (gdb) ngx_pfree (pool=0x555e0db19680, p=0x555e0f52d790) at src/core/ngx_palloc.c:279 279 { (gdb) 282 for (l = pool->large; l; l = l->next) { (gdb) 293 return NGX_DECLINED; (gdb) 282 for (l = pool->large; l; l = l->next) { (gdb) 283 if (p == l->alloc) { (gdb) 283 if (p == l->alloc) { (gdb) 283 if (p == l->alloc) { (gdb) 293 return NGX_DECLINED; (gdb) 294 } (gdb) ngx_http_set_keepalive (r=0xfffffffffffffffb) at src/http/ngx_http_request.c:2987 2987 ngx_free_chain(c->pool, ln); (gdb) 2983 for (cl = hc->free; cl; /* void */) { (gdb) 2987 ngx_free_chain(c->pool, ln); (and so on...) On Fri, Mar 24, 2017 at 1:31 PM, Richard Stanway wrote: > Hello, > I recently moved our site to a new server running Linux 4.9, Debian > 8.7 64 bit with nginx 1.11.11 from the nginx repository. Our config is > straightforward - epoll, a few proxy backends and a few fastcgi > backends, a handful of vhosts, some with HTTP2, geoip module loaded. > No AIO, no threads, no timer_resolution. > > After some time, nginx worker processes are getting stuck at 100% CPU > use in what seems to be ngx_http_finalize_connection. New requests > hitting the worker are completely stalled. 
Eventually all nginx > workers will become stuck and the sites become unreachable. > > I'm running older versions of nginx on the same versions of Debian and > Linux at other sites without a problem, but the server giving me > problems also receives a much larger amount of traffic than the > others. Due to the traffic, the debug log gets incredibly large which > makes it difficult to isolate the error. I've posted a 1 second > excerpt of the core debug log at http://pastebin.com/hqzGzjTV during > the time that some of the workers were at 100%, however I'm not sure > this contains enough information. I'll look into enabling HTTP level > logging if necessary. > > Has anyone experienced anything similar to this or have any ideas > where to start looking to debug this? > > Thanks. > > nginx version: nginx/1.11.11 > built by gcc 4.9.2 (Debian 4.9.2-10) > built with OpenSSL 1.0.1t 3 May 2016 > TLS SNI support enabled > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > --modules-path=/usr/lib/nginx/modules > --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log > --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock > --http-client-body-temp-path=/var/cache/nginx/client_temp > --http-proxy-temp-path=/var/cache/nginx/proxy_temp > --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp > --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp > --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx > --group=nginx --with-compat --with-file-aio --with-threads > --with-http_addition_module --with-http_auth_request_module > --with-http_dav_module --with-http_flv_module > --with-http_gunzip_module --with-http_gzip_static_module > --with-http_mp4_module --with-http_random_index_module > --with-http_realip_module --with-http_secure_link_module > --with-http_slice_module --with-http_ssl_module > --with-http_stub_status_module --with-http_sub_module > --with-http_v2_module --with-mail 
--with-mail_ssl_module --with-stream
> --with-stream_realip_module --with-stream_ssl_module
> --with-stream_ssl_preread_module --with-cc-opt='-g -O2
> -fstack-protector-strong -Wformat -Werror=format-security
> -Wp,-D_FORTIFY_SOURCE=2 -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now
> -Wl,--as-needed -pie'
>
>
> #0 0x000055d533ab87e8 in ngx_pfree (pool=0x55d536202fe0,
> p=0x55d5361636c0) at src/core/ngx_palloc.c:282
> #1 0x000055d533af54d9 in ngx_http_set_keepalive (r=<optimized out>)
> at src/http/ngx_http_request.c:3000
> #2 ngx_http_finalize_connection (r=<optimized out>) at
> src/http/ngx_http_request.c:2556
> #3 0x000055d533af0d8b in ngx_http_core_content_phase
> (r=0x55d536136f10, ph=0x55d537cbf210) at
> src/http/ngx_http_core_module.c:1391
> #4 0x000055d533aeb29d in ngx_http_core_run_phases
> (r=r at entry=0x55d536136f10) at src/http/ngx_http_core_module.c:860
> #5 0x000055d533aeb392 in ngx_http_handler (r=r at entry=0x55d536136f10)
> at src/http/ngx_http_core_module.c:843
> #6 0x000055d533af669e in ngx_http_process_request (r=0x55d536136f10)
> at src/http/ngx_http_request.c:1921
> #7 0x000055d533adeda4 in ngx_epoll_process_events (cycle=<optimized
> out>, timer=<optimized out>, flags=<optimized out>) at
> src/event/modules/ngx_epoll_module.c:902
> #8 0x000055d533ad5caa in ngx_process_events_and_timers
> (cycle=cycle at entry=0x55d5357ba110) at src/event/ngx_event.c:242
> #9 0x000055d533adcc31 in ngx_worker_process_cycle
> (cycle=cycle at entry=0x55d5357ba110, data=data at entry=0x12) at
> src/os/unix/ngx_process_cycle.c:749
> #10 0x000055d533adb583 in ngx_spawn_process
> (cycle=cycle at entry=0x55d5357ba110, proc=proc at entry=0x55d533adcbb0
> <ngx_worker_process_cycle>, data=data at entry=0x12,
> name=name at entry=0x55d533b71db0 "worker process",
> respawn=respawn at entry=-4) at src/os/unix/ngx_process.c:198
> #11 0x000055d533adce50 in ngx_start_worker_processes
> (cycle=0x55d5357ba110, n=24, type=-4) at
> src/os/unix/ngx_process_cycle.c:358
> #12 0x000055d533addae7 in ngx_master_process_cycle
> (cycle=0x55d5357ba110) at
src/os/unix/ngx_process_cycle.c:243
> #13 0x000055d533ab5e56 in main (argc=<optimized out>, argv=<optimized
> out>) at src/core/nginx.c:375

From mdounin at mdounin.ru Fri Mar 24 13:03:53 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 24 Mar 2017 16:03:53 +0300
Subject: 100% CPU use in ngx_http_finalize_connection
In-Reply-To:
References:
Message-ID: <20170324130353.GE13617@mdounin.ru>

Hello!

On Fri, Mar 24, 2017 at 01:31:35PM +0100, Richard Stanway wrote:

> Hello,
> I recently moved our site to a new server running Linux 4.9, Debian
> 8.7 64 bit with nginx 1.11.11 from the nginx repository. Our config is
> straightforward - epoll, a few proxy backends and a few fastcgi
> backends, a handful of vhosts, some with HTTP2, geoip module loaded.
> No AIO, no threads, no timer_resolution.
>
> After some time, nginx worker processes are getting stuck at 100% CPU
> use in what seems to be ngx_http_finalize_connection. New requests
> hitting the worker are completely stalled. Eventually all nginx
> workers will become stuck and the sites become unreachable.
>
> I'm running older versions of nginx on the same versions of Debian and
> Linux at other sites without a problem, but the server giving me
> problems also receives a much larger amount of traffic than the
> others. Due to the traffic, the debug log gets incredibly large which
> makes it difficult to isolate the error. I've posted a 1 second
> excerpt of the core debug log at http://pastebin.com/hqzGzjTV during
> the time that some of the workers were at 100%, however I'm not sure
> this contains enough information. I'll look into enabling HTTP level
> logging if necessary.
>
> Has anyone experienced anything similar to this or have any ideas
> where to start looking to debug this?
>
> Thanks.
>
> nginx version: nginx/1.11.11
> built by gcc 4.9.2 (Debian 4.9.2-10)

[...]
> #0 0x000055d533ab87e8 in ngx_pfree (pool=0x55d536202fe0,
> p=0x55d5361636c0) at src/core/ngx_palloc.c:282
> #1 0x000055d533af54d9 in ngx_http_set_keepalive (r=<optimized out>)
> at src/http/ngx_http_request.c:3000
> #2 ngx_http_finalize_connection (r=<optimized out>) at
> src/http/ngx_http_request.c:2556
> #3 0x000055d533af0d8b in ngx_http_core_content_phase
> (r=0x55d536136f10, ph=0x55d537cbf210) at
> src/http/ngx_http_core_module.c:1391

I think I see the problem.
Please try the following patch:

diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c
--- a/src/http/ngx_http_request.c
+++ b/src/http/ngx_http_request.c
@@ -2904,6 +2904,7 @@ ngx_http_set_keepalive(ngx_http_request_
     }
 
     cl->buf = b;
+    cl->next = NULL;
 
     hc->busy = cl;
     hc->nbusy = 1;

-- 
Maxim Dounin
http://nginx.org/

From r1ch+nginx at teamliquid.net Fri Mar 24 13:11:12 2017
From: r1ch+nginx at teamliquid.net (Richard Stanway)
Date: Fri, 24 Mar 2017 14:11:12 +0100
Subject: 100% CPU use in ngx_http_finalize_connection
In-Reply-To: <20170324130353.GE13617@mdounin.ru>
References: <20170324130353.GE13617@mdounin.ru>
Message-ID:

Hi Maxim,
Thanks for the quick patch! I've applied it to our server and will
monitor the results. Usually the problem starts to occur within 1-2
hours of a restart, so I'll post again later today with an update.

On Fri, Mar 24, 2017 at 2:03 PM, Maxim Dounin wrote:
> Hello!
>
> On Fri, Mar 24, 2017 at 01:31:35PM +0100, Richard Stanway wrote:
>
>> Hello,
>> I recently moved our site to a new server running Linux 4.9, Debian
>> 8.7 64 bit with nginx 1.11.11 from the nginx repository. Our config is
>> straightforward - epoll, a few proxy backends and a few fastcgi
>> backends, a handful of vhosts, some with HTTP2, geoip module loaded.
>> No AIO, no threads, no timer_resolution.
>>
>> After some time, nginx worker processes are getting stuck at 100% CPU
>> use in what seems to be ngx_http_finalize_connection. New requests
>> hitting the worker are completely stalled.
Eventually all nginx
>> workers will become stuck and the sites become unreachable.
>>
>> I'm running older versions of nginx on the same versions of Debian and
>> Linux at other sites without a problem, but the server giving me
>> problems also receives a much larger amount of traffic than the
>> others. Due to the traffic, the debug log gets incredibly large which
>> makes it difficult to isolate the error. I've posted a 1 second
>> excerpt of the core debug log at http://pastebin.com/hqzGzjTV during
>> the time that some of the workers were at 100%, however I'm not sure
>> this contains enough information. I'll look into enabling HTTP level
>> logging if necessary.
>>
>> Has anyone experienced anything similar to this or have any ideas
>> where to start looking to debug this?
>>
>> Thanks.
>>
>> nginx version: nginx/1.11.11
>> built by gcc 4.9.2 (Debian 4.9.2-10)
>
> [...]
>
>> #0 0x000055d533ab87e8 in ngx_pfree (pool=0x55d536202fe0,
>> p=0x55d5361636c0) at src/core/ngx_palloc.c:282
>> #1 0x000055d533af54d9 in ngx_http_set_keepalive (r=<optimized out>)
>> at src/http/ngx_http_request.c:3000
>> #2 ngx_http_finalize_connection (r=<optimized out>) at
>> src/http/ngx_http_request.c:2556
>> #3 0x000055d533af0d8b in ngx_http_core_content_phase
>> (r=0x55d536136f10, ph=0x55d537cbf210) at
>> src/http/ngx_http_core_module.c:1391
>
> I think I see the problem.
> Please try the following patch: > > diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c > --- a/src/http/ngx_http_request.c > +++ b/src/http/ngx_http_request.c > @@ -2904,6 +2904,7 @@ ngx_http_set_keepalive(ngx_http_request_ > } > > cl->buf = b; > + cl->next = NULL; > > hc->busy = cl; > hc->nbusy = 1; > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Fri Mar 24 15:18:59 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 24 Mar 2017 18:18:59 +0300 Subject: nginx-1.11.12 Message-ID: <20170324151859.GI13617@mdounin.ru> Changes with nginx 1.11.12 24 Mar 2017 *) Bugfix: nginx might hog CPU; the bug had appeared in 1.11.11. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Fri Mar 24 15:26:44 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 24 Mar 2017 18:26:44 +0300 Subject: 100% CPU use in ngx_http_finalize_connection In-Reply-To: References: <20170324130353.GE13617@mdounin.ru> Message-ID: <20170324152644.GM13617@mdounin.ru> Hello! On Fri, Mar 24, 2017 at 02:11:12PM +0100, Richard Stanway wrote: > Hi Maxim, > Thanks for the quick patch! I've applied it to our server and will > monitor the results. Usually the problem starts to occur within 1-2 > hours of a restart, so I'll post again later today with an update. A version with the fix was released, nginx 1.11.12. Thanks for reporting this. -- Maxim Dounin http://nginx.org/ From r1ch+nginx at teamliquid.net Fri Mar 24 17:49:15 2017 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Fri, 24 Mar 2017 18:49:15 +0100 Subject: 100% CPU use in ngx_http_finalize_connection In-Reply-To: <20170324152644.GM13617@mdounin.ru> References: <20170324130353.GE13617@mdounin.ru> <20170324152644.GM13617@mdounin.ru> Message-ID: Thanks Maxim, everything is looking great after the patch. 
On Fri, Mar 24, 2017 at 4:26 PM, Maxim Dounin wrote: > Hello! > > On Fri, Mar 24, 2017 at 02:11:12PM +0100, Richard Stanway wrote: > >> Hi Maxim, >> Thanks for the quick patch! I've applied it to our server and will >> monitor the results. Usually the problem starts to occur within 1-2 >> hours of a restart, so I'll post again later today with an update. > > A version with the fix was released, nginx 1.11.12. > Thanks for reporting this. > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Fri Mar 24 18:34:06 2017 From: nginx-forum at forum.nginx.org (nginxsantos) Date: Fri, 24 Mar 2017 14:34:06 -0400 Subject: HTTP To TCP Conversion In-Reply-To: <0c9fd570-a3ea-51d5-9f93-66e0de100879@nginx.com> References: <0c9fd570-a3ea-51d5-9f93-66e0de100879@nginx.com> Message-ID: <5184478ebe0aaf6a7214c48d80747529.NginxMailingListEnglish@forum.nginx.org> Thank you. Can you please share more info on this please. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273098,273181#msg-273181

From nginx-forum at forum.nginx.org Fri Mar 24 21:18:23 2017
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Fri, 24 Mar 2017 17:18:23 -0400
Subject: Nginx cookie map regex remove + character
Message-ID: <1f42674cd2c11aa7d9eed7cc33fb3887.NginxMailingListEnglish@forum.nginx.org>

So this is my map

map $http_cookie $session_id_value {
    default '';
    "~^.*[0-9a-f]{32}\=(?<session_value>[\w]{1,}+).*$" $session_value;
}

The cookie name = a MD5 sum

The full / complete value of the cookie seems to cut off at a plus +
symbol. What would the correct regex be to ignore / remove + symbols from
"session_value"?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273182,273182#msg-273182

From nginx-forum at forum.nginx.org Sat Mar 25 10:20:07 2017
From: nginx-forum at forum.nginx.org (George)
Date: Sat, 25 Mar 2017 06:20:07 -0400
Subject: nginx 1.11.12 + nginScript = failed to restart nginx server
Message-ID:

Nginx compiles successfully with nginScript as a dynamic module.
nginx -V nginx version: nginx/1.11.12 built by clang 3.4.2 (tags/RELEASE_34/dot2-final) built with LibreSSL 2.4.5 TLS SNI support enabled configure arguments: --with-ld-opt='-lrt -ljemalloc -Wl,-z,relro -Wl,-rpath,/usr/local/lib' --with-cc-opt='-m64 -mtune=native -mfpmath=sse -g -O3 -fstack-protector -fuse-ld=gold --param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wno-sign-compare -Wno-string-plus-int -Wno-deprecated-declarations -Wno-unused-parameter -Wno-unused-const-variable -Wno-conditional-uninitialized -Wno-mismatched-tags -Wno-sometimes-uninitialized -Wno-parentheses-equality -Wno-tautological-compare -Wno-self-assign -Wno-deprecated-register -Wno-deprecated -Wno-invalid-source-encoding -Wno-pointer-sign -Wno-parentheses -Wno-enum-conversion -Wno-c++11-compat-deprecated-writable-strings -Wno-write-strings -gsplit-dwarf' --sbin-path=/usr/local/sbin/nginx --conf-path=/usr/local/nginx/conf/nginx.conf --with-http_stub_status_module --with-http_secure_link_module --with-libatomic --with-http_gzip_static_module --add-dynamic-module=../ngx_brotli --with-http_sub_module --with-http_addition_module --with-http_image_filter_module=dynamic --with-http_geoip_module --add-dynamic-module=../njs/nginx --with-stream_geoip_module --with-stream_realip_module --with-stream_ssl_preread_module --with-threads --with-stream=dynamic --with-stream_ssl_module --with-http_realip_module --add-dynamic-module=../ngx-fancyindex-0.4.0 --add-module=../ngx_cache_purge-2.3 --add-module=../ngx_devel_kit-0.3.0 --add-module=../set-misc-nginx-module-0.31 --add-module=../echo-nginx-module-0.60 --add-module=../redis2-nginx-module-0.13 --add-module=../ngx_http_redis-0.3.7 --add-module=../memc-nginx-module-0.17 --add-module=../srcache-nginx-module-0.31 --add-module=../headers-more-nginx-module-0.32 --with-pcre=../pcre-8.40 --with-pcre-jit --with-zlib=../zlib-1.2.11 --with-http_ssl_module --with-http_v2_module --with-openssl=../libressl-2.4.5 But I've tried both the 
example nginScript configurations at
https://www.nginx.com/blog/introduction-nginscript/ as well as the example
posted at http://nginx.org/en/docs/http/ngx_http_js_module.html. But both
fail to restart the nginx server with no indication as to why
(https://community.centminmod.com/posts/46868/).

nginx -t
nginx: configuration file /usr/local/nginx/conf/nginx.conf test failed

is all I see, nothing else.

Loaded the nginScript module as a dynamic module via include file
/usr/local/nginx/conf/dynamic-modules.conf in nginx.conf:

cat /usr/local/nginx/conf/dynamic-modules.conf
load_module "modules/ngx_http_image_filter_module.so";
load_module "modules/ngx_http_fancyindex_module.so";
load_module "modules/ngx_http_brotli_filter_module.so";
load_module "modules/ngx_http_brotli_static_module.so";
load_module "modules/ngx_stream_module.so";
load_module "modules/ngx_http_js_module.so";
load_module "modules/ngx_stream_js_module.so";

nginx.conf excerpt:

user nginx nginx;
worker_processes 4;
worker_priority -10;
worker_rlimit_nofile 260000;
timer_resolution 100ms;
pcre_jit on;

include /usr/local/nginx/conf/dynamic-modules.conf;

pid logs/nginx.pid;

events {
    worker_connections 10000;
    accept_mutex off;
    accept_mutex_delay 200ms;
    use epoll;
    #multi_accept on;
}

http {

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273183,273183#msg-273183

From joecurtis at f2s.com Sat Mar 25 15:59:37 2017
From: joecurtis at f2s.com (Joe Curtis)
Date: Sat, 25 Mar 2017 15:59:37 +0000
Subject: Strange LAN access error
Message-ID: <20ad638b-3929-618f-7648-85e4e725398e@f2s.com>

On my LAN I have 3 Windows 10 PCs, 1 Linux PC (Fedora 25 with apache2), 1
Raspberry Pi2B (Moode audio) and 2 Raspberry Pi3Bs (NGINX), all behind a
TalkTalk router with a mixture of ethernet and Wi-Fi links.
One of the Pi3B's is running Cumulus software linked to a weather station and has a nginx sever hosting a weather web site (www.craythorneweather.info) The other Pi3B has a nginx server acting as proxy server to pass weather requests to the weather Pi3 or to a community site on the fedora machine as appropriate. The second Pi3 in addition to acting as a proxy server also hosts its own (development) web site. I can get to all the web sites externally from the WAN but have hit a snag accessing locally on the LAN. Using a browser (Firefox) on one of the pc's I can get to the Fedora server or the weather Pi3 server using the LAN ip address without a problem, if I try the lan address of the other Pi3 the browser appends www.craythorneweather.info many hundreds of times to the URI until I get a '414 Request-URI Too Large' error message from the nginx server (1.10.3). Any help getting to the bottom of this problem would be greatly appreciated. Thanks, Joe Curtis From tolga.ceylan at gmail.com Sat Mar 25 20:58:41 2017 From: tolga.ceylan at gmail.com (Tolga Ceylan) Date: Sat, 25 Mar 2017 13:58:41 -0700 Subject: Strange LAN access error In-Reply-To: <20ad638b-3929-618f-7648-85e4e725398e@f2s.com> References: <20ad638b-3929-618f-7648-85e4e725398e@f2s.com> Message-ID: This is difficult to assess without specific details of your environment, but assuming that your http headers are not really that large, 414 could be a sign that you perhaps have a loop somewhere and forwarding requests between your servers until you hit 414. Again this is assuming that you have a 'proxy-set-header' in one of these proxies, which eventually overflows the http header. Perhaps your local LAN DNS is not setup correctly, so your router and/or proxies form a traffic loop. 
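
Expanding on the loop idea above: one classic way to get exactly the reported symptom (the hostname glued onto the URI over and over until 414) is a redirect whose replacement string lacks a scheme, so nginx emits it as a relative Location that the browser resolves against the current path. A purely hypothetical sketch (address and directive placement are made up for illustration, not taken from Joe's config):

```nginx
# Hypothetical vhost on the proxying Pi3 (illustration only).
server {
    listen 80;
    server_name 192.168.0.10;   # made-up LAN address of the proxy

    location / {
        # Missing "http://" here means the replacement is treated as a
        # URI, so the 301 carries a *relative* Location of
        # "www.craythorneweather.info/...". Each time the browser follows
        # it, the host name is appended to the path again, until nginx
        # answers "414 Request-URI Too Large".
        rewrite ^(.*)$ www.craythorneweather.info$1 permanent;
    }
}
```

If something like this is in play, making the replacement absolute (starting with http:// or $scheme://), or checking how the LAN DNS resolves the proxied hostname, usually clears it up.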
On Sat, Mar 25, 2017 at 8:59 AM, Joe Curtis wrote: > On my LAN I have 3 windows 10 pc's, 1 Linux PC( Fedora 25 with apache2), 1 > Raspberry Pi2b (Moode audio) and 2 Raspberry Pi3B's( NGINX), all behind a > TalkTalk router with a mixture of ethernet and Wi-Fi links. One of the > Pi3B's is running Cumulus software linked to a weather station and has a > nginx sever hosting a weather web site (www.craythorneweather.info) The > other Pi3B has a nginx server acting as proxy server to pass weather > requests to the weather Pi3 or to a community site on the fedora machine as > appropriate. The second Pi3 in addition to acting as a proxy server also > hosts its own (development) web site. > > I can get to all the web sites externally from the WAN but have hit a snag > accessing locally on the LAN. Using a browser (Firefox) on one of the pc's I > can get to the Fedora server or the weather Pi3 server using the LAN ip > address without a problem, if I try the lan address of the other Pi3 the > browser appends www.craythorneweather.info many hundreds of times to the URI > until I get a '414 Request-URI Too Large' error message from the nginx > server (1.10.3). > > Any help getting to the bottom of this problem would be greatly appreciated. > > Thanks, > > Joe Curtis > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Sun Mar 26 06:20:45 2017 From: nginx-forum at forum.nginx.org (nginxsantos) Date: Sun, 26 Mar 2017 02:20:45 -0400 Subject: HTTP To TCP Conversion In-Reply-To: <250638451f747492a2e999de027298bb.NginxMailingListEnglish@forum.nginx.org> References: <250638451f747492a2e999de027298bb.NginxMailingListEnglish@forum.nginx.org> Message-ID: <50053867fa1e9cbc4dac1c1cf4bbbf14.NginxMailingListEnglish@forum.nginx.org> Thanks. Is the LUA capable enough to exchange between HTTP module to TCP module? Any reference? Thanks. 
Santos Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273098,273197#msg-273197 From sca at andreasschulze.de Sun Mar 26 19:40:08 2017 From: sca at andreasschulze.de (A. Schulze) Date: Sun, 26 Mar 2017 21:40:08 +0200 Subject: echo-nginx-module and 1.11.12 (was: echo-nginx-module and HTTP2) In-Reply-To: References: <20160128104504.Horde.gmBxk8Ku529Cc0xauggGzQ_@andreasschulze.de> <20160129081953.Horde.jBSLaoLM_b7TSuovLq5A04h@andreasschulze.de> Message-ID: Am 01.02.2016 um 23:53 schrieb Yichun Zhang (agentzh): > Hello! > > On Fri, Jan 29, 2016 at 8:40 PM, Kurt Cancemi wrote: >> I was doing some debugging and though I haven't found a fix. The problem is >> in the ngx_http_echo_client_request_headers_variable() function c->buffer is >> NULL when http v2 is used for some reason (internal to nginx). >> > > This is expected since the HTTP/2 mode of NGINX reads the request > header into a different place. We should branch the code accordingly. > > Regards, > -agentzh Hello, unfortunately the module fail to compile on 1.11.12 while compiling was successfully up to 1.11.10 cc -c -g -O2 -fdebug-prefix-map=/<>=. -fstack-protector-strong -Wformat -Werror=format-security -g -O2 -fdebug-prefix-map=/<>=. 
-fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -I src/core -I src/event -I src/event/modules -I src/os/unix -I /usr/include -I objs -I src/http -I src/http/modules -I src/http/v2 -I src/stream \ -o objs/addon/src/ngx_http_echo_request_info.o \ ./echo-nginx-module-0.60//src/ngx_http_echo_request_info.c ./echo-nginx-module-0.60//src/ngx_http_echo_request_info.c: In function 'ngx_http_echo_client_request_headers_variable': ./echo-nginx-module-0.60//src/ngx_http_echo_request_info.c:219:15: error: incompatible types when assigning to type 'ngx_buf_t * {aka struct ngx_buf_s *}' from type 'ngx_chain_t {aka struct ngx_chain_s}' b = hc->busy[i]; ^ ./echo-nginx-module-0.60//src/ngx_http_echo_request_info.c:284:15: error: incompatible types when assigning to type 'ngx_buf_t * {aka struct ngx_buf_s *}' from type 'ngx_chain_t {aka struct ngx_chain_s}' b = hc->busy[i]; ^ objs/Makefile:1523: recipe for target 'objs/addon/src/ngx_http_echo_request_info.o' failed I guess, something changed from 1.11.10 to 1.11.12 ... Andreas From lucas at slcoding.com Sun Mar 26 19:45:22 2017 From: lucas at slcoding.com (Lucas Rolff) Date: Sun, 26 Mar 2017 21:45:22 +0200 Subject: echo-nginx-module and 1.11.12 In-Reply-To: References: <20160128104504.Horde.gmBxk8Ku529Cc0xauggGzQ_@andreasschulze.de> <20160129081953.Horde.jBSLaoLM_b7TSuovLq5A04h@andreasschulze.de> Message-ID: <58D81A52.6000602@slcoding.com> When the pull request here gets merged into master, then it should work on 1.11.11 (and 1.11.12): https://github.com/openresty/echo-nginx-module/pull/65 A. Schulze wrote: > Am 01.02.2016 um 23:53 schrieb Yichun Zhang (agentzh): >> Hello! >> >> On Fri, Jan 29, 2016 at 8:40 PM, Kurt Cancemi wrote: >>> I was doing some debugging and though I haven't found a fix. The problem is >>> in the ngx_http_echo_client_request_headers_variable() function c->buffer is >>> NULL when http v2 is used for some reason (internal to nginx). 
>>> >> This is expected since the HTTP/2 mode of NGINX reads the request >> header into a different place. We should branch the code accordingly. >> >> Regards, >> -agentzh > > Hello, > > unfortunately the module fail to compile on 1.11.12 > while compiling was successfully up to 1.11.10 > > cc -c -g -O2 -fdebug-prefix-map=/<>=. -fstack-protector-strong -Wformat -Werror=format-security -g -O2 -fdebug-prefix-map=/<>=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -I src/core -I src/event -I src/event/modules -I src/os/unix -I /usr/include -I objs -I src/http -I src/http/modules -I src/http/v2 -I src/stream \ > -o objs/addon/src/ngx_http_echo_request_info.o \ > ./echo-nginx-module-0.60//src/ngx_http_echo_request_info.c > ./echo-nginx-module-0.60//src/ngx_http_echo_request_info.c: In function 'ngx_http_echo_client_request_headers_variable': > ./echo-nginx-module-0.60//src/ngx_http_echo_request_info.c:219:15: error: incompatible types when assigning to type 'ngx_buf_t * {aka struct ngx_buf_s *}' from type 'ngx_chain_t {aka struct ngx_chain_s}' > b = hc->busy[i]; > ^ > ./echo-nginx-module-0.60//src/ngx_http_echo_request_info.c:284:15: error: incompatible types when assigning to type 'ngx_buf_t * {aka struct ngx_buf_s *}' from type 'ngx_chain_t {aka struct ngx_chain_s}' > b = hc->busy[i]; > ^ > objs/Makefile:1523: recipe for target 'objs/addon/src/ngx_http_echo_request_info.o' failed > > I guess, something changed from 1.11.10 to 1.11.12 ... > > Andreas > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kworthington at gmail.com Mon Mar 27 15:01:58 2017 From: kworthington at gmail.com (Kevin Worthington) Date: Mon, 27 Mar 2017 11:01:58 -0400 Subject: [nginx-announce] nginx-1.11.12 In-Reply-To: <20170324151903.GJ13617@mdounin.ru> References: <20170324151903.GJ13617@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.11.12 for Windows https://kevinworthington.com/nginxwin11112 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Fri, Mar 24, 2017 at 11:19 AM, Maxim Dounin wrote: > Changes with nginx 1.11.12 24 Mar > 2017 > > *) Bugfix: nginx might hog CPU; the bug had appeared in 1.11.11. > > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Mar 27 20:28:35 2017 From: nginx-forum at forum.nginx.org (piroaa) Date: Mon, 27 Mar 2017 16:28:35 -0400 Subject: SSL client certyficage Message-ID: <51e79eedb74a83527a47fa705e3aecf5.NginxMailingListEnglish@forum.nginx.org> Hi. I have own cloud server with ssl client cert verification ssl_verify_client set to on. How I can disable verification for location/index.php/s/ share links ? 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273239,273239#msg-273239 From alex at samad.com.au Mon Mar 27 23:44:12 2017 From: alex at samad.com.au (Alex Samad) Date: Tue, 28 Mar 2017 10:44:12 +1100 Subject: SSL client certyficage In-Reply-To: <51e79eedb74a83527a47fa705e3aecf5.NginxMailingListEnglish@forum.nginx.org> References: <51e79eedb74a83527a47fa705e3aecf5.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi If you asking if some part of the tree can have no ssl client verification, then no https://a.b.c.d/ https://a.b.c.d/This/Some https://a.b.c.d/Not/here Once you turn on client verififcation its on for / and down, no way to turn it off for https://a.b.c.d/Not/here of its on. Shame, I would like to see this feature, but not possible with current code base, I understand. Alex On 28 March 2017 at 07:28, piroaa wrote: > Hi. > I have own cloud server with ssl client cert verification ssl_verify_client > set to on. How I can disable verification for location/index.php/s/ share > links ? > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,273239,273239#msg-273239 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From acgtek at yahoo.com Tue Mar 28 01:43:41 2017 From: acgtek at yahoo.com (Jun Chen) Date: Tue, 28 Mar 2017 01:43:41 +0000 (UTC) Subject: How to exact match a nginx location? References: <420259965.5126672.1490665421463.ref@mail.yahoo.com> Message-ID: <420259965.5126672.1490665421463@mail.yahoo.com> | I am configuring a nginx revser proxy. The result should be when user type http://10.21.169.13/mini, then the request should be proxy_pass to 192.168.1.56:5000. 
Here is the nginx config:

server {
    listen 80;
    server_name 10.21.169.13;

    location = /mini {
        proxy_pass http://192.168.1.65:5000;
        include /etc/nginx/proxy_params;
    }
}

The above location block never worked with http://10.21.169.13/mini. The
only location block worked is:

server {
    listen 80;
    server_name 10.21.169.13;

    location / {
        proxy_pass http://192.168.1.65:5000;
        include /etc/nginx/proxy_params;
    }
}

But the above config also match http://10.21.169.13 request which is too
board. What kind of location block will only match
'http://10.21.169.13/mini' and no more?

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From alex at samad.com.au Tue Mar 28 04:07:57 2017
From: alex at samad.com.au (Alex Samad)
Date: Tue, 28 Mar 2017 15:07:57 +1100
Subject: How to exact match a nginx location?
In-Reply-To: <420259965.5126672.1490665421463@mail.yahoo.com>
References: <420259965.5126672.1490665421463.ref@mail.yahoo.com>
 <420259965.5126672.1490665421463@mail.yahoo.com>
Message-ID:

so (have a stab at this)

location = /mini {

equals http://10.21.169.13/mini and not http://10.21.169.13/mini/ or
anything else http://10.21.169.13/mini/*

try

location /mini {

or

location /mini/ {

A

On 28 March 2017 at 12:43, Jun Chen via nginx wrote:
>
> I am configuring a nginx revser proxy. The result should be when user type
> http://10.21.169.13/mini, then the request should be proxy_pass to
> 192.168.1.56:5000. Here is the nginx config:
>
> server {
>     listen 80;
>     server_name 10.21.169.13;
>
>     location = /mini {
>         proxy_pass http://192.168.1.65:5000;
>         include /etc/nginx/proxy_params;
>     }
> }
>
> The above location block never worked with http://10.21.169.13/mini. The
> only location block worked is:
>
> server {
>     listen 80;
>     server_name 10.21.169.13;
>
>     location / {
>         proxy_pass http://192.168.1.65:5000;
>         include /etc/nginx/proxy_params;
>     }
> }
>
> But the above config also match http://10.21.169.13 request which is too
> board.
> What kind of location block will only match 'http://10.21.169.13/mini` > and no more? > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at ohlste.in Tue Mar 28 04:17:14 2017 From: jim at ohlste.in (Jim Ohlstein) Date: Tue, 28 Mar 2017 00:17:14 -0400 Subject: SSL client certyficage In-Reply-To: <51e79eedb74a83527a47fa705e3aecf5.NginxMailingListEnglish@forum.nginx.org> References: <51e79eedb74a83527a47fa705e3aecf5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8f76368c-95ff-6c3d-442e-c4bc40280977@ohlste.in> Hello, On 3/27/17 4:28 PM, piroaa wrote: > Hi. > I have own cloud server with ssl client cert verification ssl_verify_client > set to on. How I can disable verification for location/index.php/s/ share > links ? > try setting ssl_verify_client to optional and use the built in variable "ssl_client_verify". Something like this (not tested): server { ... ssl_client_certificate /path/to/client.crt; ssl_verify_client optional; ## Unprotected part of site location ^~ /path/to/shared/links { ... } ## Protected part of site location ~ /main/site if ($ssl_client_verify != SUCCESS) { return 403; } ... } -- Jim Ohlstein "Never argue with a fool, onlookers may not be able to tell the difference." - Mark Twain From nginx-forum at forum.nginx.org Tue Mar 28 09:18:54 2017 From: nginx-forum at forum.nginx.org (freel) Date: Tue, 28 Mar 2017 05:18:54 -0400 Subject: UDP TLS Termination Message-ID: Hi guys, We are interested in UDP TLS Termination, any updates about this feature? I think I saw such topic on forum few moths ago, but I'm unable to find it now. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273251,273251#msg-273251 From vl at nginx.com Tue Mar 28 09:25:35 2017 From: vl at nginx.com (Vladimir Homutov) Date: Tue, 28 Mar 2017 12:25:35 +0300 Subject: UDP TLS Termination In-Reply-To: References: Message-ID: <20170328092534.GA32239@vlpc.nginx.com> On Tue, Mar 28, 2017 at 05:18:54AM -0400, freel wrote: > Hi guys, > > We are interested in UDP TLS Termination, any updates about this feature? I > think I saw such a topic on the forum a few months ago, but I'm unable to find it > now. > Can you please describe your use-case? Which applications do you use, why do you need it, etc. Please note that if we are speaking about DTLS, terminating it will mean converting datagrams into a stream, and I'm not sure why anyone that has an application working over a stream (i.e. TCP) would want to use DTLS at some point to access it instead of normal DTLS. From vl at nginx.com Tue Mar 28 09:28:28 2017 From: vl at nginx.com (Vladimir Homutov) Date: Tue, 28 Mar 2017 12:28:28 +0300 Subject: UDP TLS Termination In-Reply-To: <20170328092534.GA32239@vlpc.nginx.com> References: <20170328092534.GA32239@vlpc.nginx.com> Message-ID: <20170328092827.GA32543@vlpc.nginx.com> On Tue, Mar 28, 2017 at 12:25:35PM +0300, Vladimir Homutov wrote: > instead of normal DTLS. I meant SSL (TLS), of course. From yar at nginx.com Tue Mar 28 12:54:43 2017 From: yar at nginx.com (Yaroslav Zhuravlev) Date: Tue, 28 Mar 2017 15:54:43 +0300 Subject: Illumos/SmartOS/Solaris issue when using eventport and keepalive at the same time In-Reply-To: <821683c1-2489-dc60-3de3-79223b8dda2c@xdrv.co.uk> References: <20170317130931.GT13617@mdounin.ru> <821683c1-2489-dc60-3de3-79223b8dda2c@xdrv.co.uk> Message-ID: <94A30159-1BF2-496A-A136-FDAB96BD0661@nginx.com> On 18 Mar 2017, at 14:14, James wrote: > On 17/03/2017 13:09, Maxim Dounin wrote: > > Hello!
> >> There are problems with eventport implementation, known for at >> least several years now, and these can be easily reproduced by >> running our test suite with 'use eventport' added to test >> configurations. If I recall correctly, upstream keepalive is not >> something important to trigger problems, any connection to an >> upstream is basically enough. >> >> Unfortunately, these problems are very low priority due to >> minor popularity of Solaris. Consider using /dev/poll instead, >> which is the default, well tested and has no known problems. > > I accept all that but learnt the hard way myself. Perhaps the docs could stop recommending eventport on Solaris and warn against its use for now. eg: > > http://nginx.org/en/docs/events.html > Hi James, Thanks, docs updated: http://nginx.org/en/docs/events.html#eventport > > Thanks! > > > James. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- yar From nginx-forum at forum.nginx.org Tue Mar 28 16:56:13 2017 From: nginx-forum at forum.nginx.org (c4rl) Date: Tue, 28 Mar 2017 12:56:13 -0400 Subject: curl 301 moved permanently if I don't use slash at the end of the url Message-ID: Hi, I need to list the content of some directories with curl without using a '/' at the end of the url. If I do not use the slash then I receive the message below, otherwise the content is shown. I tried many rewrite rules without success. [user at localhost ~]$ curl http://mydomain.example.com/data/foo 301 Moved Permanently

301 Moved Permanently
nginx

[user at localhost ~]$ curl http://mydomain.example.com/data/foo/
Index of /data/foo/
../
57581/                                             12-Jul-2016 01:56                   -
57582/                                             13-Jul-2016 01:55                   -
57583/                                             14-Jul-2016 00:34                   -
This is my vhost configuration: server { listen 80; server_name mydomain.example.com; access_log /var/log/nginx/mydomain.example.com-access.log; error_log /var/log/nginx/mydomain.example.com-error.log error; location /data/foo { alias /data/foo; autoindex on; } location /data/bit { alias /data/bit; autoindex on; } } Thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273267,273267#msg-273267 From nginx-forum at forum.nginx.org Tue Mar 28 20:42:22 2017 From: nginx-forum at forum.nginx.org (alweiss) Date: Tue, 28 Mar 2017 16:42:22 -0400 Subject: opinions about Session tickets In-Reply-To: References: Message-ID: Hi Lukas, I plan to use a key file on a tmpfs on all my servers because we want to comply with RFC 5077. Each time I change the key file with a new key, is it necessary to run a "systemctl reload nginx", or to do something else? If a reload is not necessary, would working with 3 files, always named the same, be enough if I update the content with the new key? Like: remove file3, cp file2 to file3, cp file1 to file2, generate a new key in a new file1. Thanks Alex Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266070,273272#msg-273272 From luky-37 at hotmail.com Tue Mar 28 23:53:35 2017 From: luky-37 at hotmail.com (Lukas Tribus) Date: Tue, 28 Mar 2017 23:53:35 +0000 Subject: AW: RE: opinions about Session tickets In-Reply-To: References: , Message-ID: > Each time I change the key file with a new key, is it necessary to run a > "systemctl reload nginx", or to do something else? Yes, afaik nginx requires a reload. Haproxy can replace TLS tickets via its admin socket [1] so a reload/restart is not required; I'm not aware of similar nginx functionality (but the reload is relatively painless in nginx due to the master/worker concept). > If a reload is not necessary, would working with 3 files, always named the > same, be enough if I update the content with the new key?
> Like: remove file3, cp file2 to file3, cp file1 to file2, generate a new > key in a new file1. No, that reload is necessary. Make sure you follow the advice in the doc with multiple tickets, or actually, use the following approach: ssl_session_ticket_key current.key; ssl_session_ticket_key next.key; ssl_session_ticket_key previous.key; and something like this whenever you want to replace the tickets: mv current.key previous.key mv next.key current.key "openssl rand 80 > next.key" (or rsync to/from multiple servers) /etc/init.d/nginx reload (or whatever the equivalent is on your system) That way, a new key will be distributed first, and only actively used for encryption on the next reload, so regardless of which server the client hits, it always has an up-to-date TLS ticket key, allowing decryption. cheers, lukas [1] https://cbonte.github.io/haproxy-dconv/1.7/management.html#9.3-set%20ssl%20tls-key [2] http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_ticket_key From nginx-forum at forum.nginx.org Wed Mar 29 01:18:29 2017 From: nginx-forum at forum.nginx.org (JohnCarne) Date: Tue, 28 Mar 2017 21:18:29 -0400 Subject: Memory issue Message-ID: Hi, We suspect an issue on a cPanel server since the last nginx update. Every night there are many nginx reloads due to the stats generation process: we see an ever-increasing memory use of the nginx worker processes. It usually stays around 1-2%, but now it keeps accumulating without coming back to normal each night; after 3 days we are at 7% memory.
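To put numbers on memory growth like the one described just above, one rough approach is to sample worker memory on a schedule. The sketch below is hypothetical (the log path and the use of Linux procps `ps -C` are assumptions, nothing from the thread); each run appends one timestamped line with the summed RSS of all nginx processes, in KB:

```shell
#!/bin/sh
# Hypothetical leak-tracking helper: append one "epoch,total_rss_kb" line
# per run. Assumes Linux procps ps; run it from cron and graph column two.
LOG="${LOG:-./nginx-rss.log}"
{
  printf '%s,' "$(date +%s)"    # epoch timestamp plus separator
  ps -o rss= -C nginx 2>/dev/null | awk '{ s += $1 } END { print s + 0 }'
} >> "$LOG"
```

If the second column keeps climbing across the nightly reloads, that trend is the concrete data worth attaching to a bug report.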
Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273274,273274#msg-273274 From nginx-forum at forum.nginx.org Wed Mar 29 02:18:47 2017 From: nginx-forum at forum.nginx.org (JohnCarne) Date: Tue, 28 Mar 2017 22:18:47 -0400 Subject: Memory issue In-Reply-To: References: Message-ID: <8f5f69a399cbb13f64928e655b1c6c1f.NginxMailingListEnglish@forum.nginx.org> We only use nginx as a proxy on the concerned server Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273274,273275#msg-273275 From nginx-forum at forum.nginx.org Wed Mar 29 08:04:24 2017 From: nginx-forum at forum.nginx.org (sv_91) Date: Wed, 29 Mar 2017 04:04:24 -0400 Subject: slow keep-alive with generic kernel Message-ID: There are 2 different versions of the program, both using keep-alive. In the first program there is a short delay between the connect call and the write call; in the second program, write is called immediately after connect. At the same time, the first program shows an rps 2 times lower than the second program. After tracing with tcpdump, I can see that the server seems to "remember" the time interval between the connection and the first write, and then follows it when sending all subsequent responses. What could this be related to? It reproduces on Linux kernel 4.4.0-31-generic Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273276,273276#msg-273276 From nginx-forum at forum.nginx.org Wed Mar 29 09:03:40 2017 From: nginx-forum at forum.nginx.org (zaidahmd) Date: Wed, 29 Mar 2017 05:03:40 -0400 Subject: NGINX - API Gateway - Can It work With Session Based Authentication and Upstream Applicaitons Message-ID: <75b8d05a9ddaa565828e2f4d70ed5639.NginxMailingListEnglish@forum.nginx.org> Hi Guys, I read the NGINX docs for the API gateway functionality, where users of my upstream application can be authenticated by a different application. My idea was to develop 2 applications as a proof of concept. The applications are as follows: 1.
Main application: an upstream application based on Spring MVC, using sessions to identify the logged-in users. 2. Authentication application: a simple web application with only a login page and authentication functionality. I am planning to have sessions created in both applications (authentication, upstream). So when the user sends a login request, Nginx should forward the request to the authentication application to check if the user is logged in or authorized. Once logged in, show him/her the index page, loaded from the upstream application with another session id generated by the upstream server. When the logged-in user sends a post-login request to submit a form, NGINX sends this request to the authentication application to verify that the session is valid; if valid, it lets the request go to the upstream server, which serves it. This means the page in the browser can hold two sessions. I want to know whether my understanding of how the API gateway design should be used is correct. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273277,273277#msg-273277 From nginx-forum at forum.nginx.org Wed Mar 29 13:03:29 2017 From: nginx-forum at forum.nginx.org (crasyangel) Date: Wed, 29 Mar 2017 09:03:29 -0400 Subject: How nginx write same log file concurrently Message-ID: <6c0279faff0fb86169994c7a1fead420.NginxMailingListEnglish@forum.nginx.org> If the workers write to the same fd inherited after fork() directly, would that do the job?
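A side note on the log question just above: nginx opens its log files in append mode before forking workers, and POSIX guarantees that each write() on an O_APPEND descriptor is positioned atomically at the current end of file. The shell's `>>` uses O_APPEND too, so the behaviour can be sketched outside nginx (the file name here is arbitrary, a demonstration rather than nginx code):

```shell
#!/bin/sh
# Four racing writers append to one file; O_APPEND keeps every line intact.
LOG="${LOG:-./shared.log}"
: > "$LOG"                    # truncate/create the shared log
for i in 1 2 3 4; do
  (
    for j in 1 2 3 4 5; do
      echo "writer $i line $j" >> "$LOG"   # one atomic append per line
    done
  ) &
done
wait                          # let all four background writers finish
wc -l < "$LOG"                # prints 20: no appended line was lost
```

The same property is why worker processes can share the master's inherited log descriptors without locking, as long as each log record goes out in a single write() call.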
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273286,273286#msg-273286 From nginx-forum at forum.nginx.org Wed Mar 29 16:44:09 2017 From: nginx-forum at forum.nginx.org (nginxsantos) Date: Wed, 29 Mar 2017 12:44:09 -0400 Subject: HTTP To TCP Conversion In-Reply-To: <250638451f747492a2e999de027298bb.NginxMailingListEnglish@forum.nginx.org> References: <250638451f747492a2e999de027298bb.NginxMailingListEnglish@forum.nginx.org> Message-ID: <53963b88395836b0e4e13cd79bdd6f94.NginxMailingListEnglish@forum.nginx.org> Can someone please guide me on how this can be done? I am quite familiar with the nginx code. If someone can guide me on how this can be achieved (passing the incoming traffic over a tcp connection to tcp clients), I can pick it up... Any help from the nginx team would be appreciated. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273098,273293#msg-273293 From nginx-forum at forum.nginx.org Thu Mar 30 05:06:43 2017 From: nginx-forum at forum.nginx.org (zaidahmd) Date: Thu, 30 Mar 2017 01:06:43 -0400 Subject: NGINX - Not Calling Java Based Auth Applicaiton running in Spring Boot Message-ID: Hi, I am implementing nginx as an API gateway. I have two applications written in spring-boot-web with angularjs. One application is the auth application and has a login.html page inside it. The other is my upstream application. My issue is that when I comment out/disable the /auth part of the below config, my application becomes accessible via nginx. But when I enable the auth config in nginx, I start getting an HTTP 500 error. I am not getting detailed logs in NGINX, so I don't know how to send more details to the forum. Following is my nginx.conf.
http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; server { listen 8180; # web user traffic service location /adi { auth_request /auth; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header Host $http_host; proxy_pass http://adi-backend; } location = /auth { internal; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header Host $http_host; proxy_pass http://127.0.0.1:8080/; } error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } error_page 401 = @error401; error_page 404 = @error401; location @error401 { return 302 http://adi-backend; } } upstream adi-backend { server localhost:8080; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273294,273294#msg-273294 From nginx-forum at forum.nginx.org Thu Mar 30 06:59:50 2017 From: nginx-forum at forum.nginx.org (shivramg94) Date: Thu, 30 Mar 2017 02:59:50 -0400 Subject: Nginx upstream server certificate verification Message-ID: <3eb066e5b9eabf8c160c21e14c36e29f.NginxMailingListEnglish@forum.nginx.org> I am trying to implement HTTPS protocol communication at every layer of a proxying path. My proxying path is from client to load balancer (nginx) and then from nginx to the upstream server. I am facing a problem when the request is proxied from nginx to the upstream server.
I am getting the following error in the nginx logs: 2017/03/26 19:08:39 [error] 76753#0: *140 upstream SSL certificate does not match "8ba0c0da44ee43ea894987ab01cf4fbc" while SSL handshaking to upstream, client: 10.191.200.230, server: abc.uscom-central-1.ssenv.opcdev2.oraclecorp.com, request: "GET /a/a.html HTTP/1.1", upstream: "https://10.240.81.28:8001/a/a.html", host: "abc.uscom-central-1.ssenv.opcdev2.oraclecorp.com:10003" This is my configuration for the upstream server block: upstream 8ba0c0da44ee43ea894987ab01cf4fbc { server slc01etc.us.oracle.com:8001 weight=1; keepalive 100; } proxy_pass https://8ba0c0da44ee43ea894987ab01cf4fbc; proxy_set_header Host $host:10003; proxy_set_header WL-Proxy-SSL true; proxy_set_header IS_SSL ssl; proxy_ssl_trusted_certificate /u01/data/secure_artifacts/ssl/trusted_certs/trusted-cert.pem; proxy_ssl_verify on; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; When the request goes from Nginx to the upstream server, nginx matches the upstream ssl certificate against the pattern present in the proxy_pass directive. But my upstream ssl certificate pattern is the upstream server hostname (slc01etc.us.oracle.com). Is there any way I can force Nginx to verify the upstream ssl certificate against the server hostnames provided in the upstream server block, instead of the pattern present in the proxy_pass directive? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273295,273295#msg-273295 From mail at kilian-ries.de Thu Mar 30 08:27:15 2017 From: mail at kilian-ries.de (Kilian Ries) Date: Thu, 30 Mar 2017 08:27:15 +0000 Subject: proxy_protocol not accepting TCP connection Message-ID: Hi, I configured my nginx with proxy_protocol, ssl and http2:
I have a situation where are two different loadbalancers in front of nginx: - Haproxy with proxy_protocol - pfsense with simple tcp loadbalancer As it seems nginx is only capable of accepting one connection type at once. It would be great if nginx can accept normal tcp connections and proxy_protocol at the same time. Thanks Greets Kilian -------------- next part -------------- An HTML attachment was scrubbed... URL: From pluknet at nginx.com Thu Mar 30 12:09:40 2017 From: pluknet at nginx.com (Sergey Kandaurov) Date: Thu, 30 Mar 2017 15:09:40 +0300 Subject: Nginx upstream server certificate verification In-Reply-To: <3eb066e5b9eabf8c160c21e14c36e29f.NginxMailingListEnglish@forum.nginx.org> References: <3eb066e5b9eabf8c160c21e14c36e29f.NginxMailingListEnglish@forum.nginx.org> Message-ID: > On 30 Mar 2017, at 09:59, shivramg94 wrote: > > I am trying to implement HTTPS protocol communication at every layer of a > proxying path. My proxying path is from client to load balancer (nginx) and > then from nginx to the upstream server. > > I am facing a problem when the request is proxied from nginx to the upstream > server. 
> > I am getting the following error in the nginx logs > > 2017/03/26 19:08:39 [error] 76753#0: *140 upstream SSL certificate does not > match "8ba0c0da44ee43ea894987ab01cf4fbc" while SSL handshaking to upstream, > client: 10.191.200.230, server: > abc.uscom-central-1.ssenv.opcdev2.oraclecorp.com, request: "GET /a/a.html > HTTP/1.1", upstream: "https://10.240.81.28:8001/a/a.html", host: > "abc.uscom-central-1.ssenv.opcdev2.oraclecorp.com:10003" > > This is my configuration for the upstream server block > > upstream 8ba0c0da44ee43ea894987ab01cf4fbc { > server slc01etc.us.oracle.com:8001 weight=1; > keepalive 100; > } > > proxy_pass https://8ba0c0da44ee43ea894987ab01cf4fbc; > proxy_set_header Host $host:10003; > proxy_set_header WL-Proxy-SSL true; > proxy_set_header IS_SSL ssl; > proxy_ssl_trusted_certificate > /u01/data/secure_artifacts/ssl/trusted_certs/trusted-cert.pem; > proxy_ssl_verify on;proxy_set_header X-Forwarded-For > $proxy_add_x_forwarded_for; > > When the request goes from Nginx to the upstream server, nginx matches the > upstream ssl certificate against the pattern present in the proxy_pass > directive. But my upstream ssl certificate pattern is the upstream server > hostname (slc01etc.us.oracle.com) . > > Is there any way, where I can force Nginx to verify the upstream ssl > certificate against the server hostnames provided in the upstream server > block, instead of the pattern present in the proxy_pass directive? Use the proxy_ssl_name directive to override. See for more details: http://nginx.org/r/proxy_ssl_name -- Sergey Kandaurov From mdounin at mdounin.ru Thu Mar 30 13:15:53 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 30 Mar 2017 16:15:53 +0300 Subject: proxy_protocol not accepting TCP connection In-Reply-To: References: Message-ID: <20170330131553.GE13617@mdounin.ru> Hello! 
On Thu, Mar 30, 2017 at 08:27:15AM +0000, Kilian Ries wrote: > Hi, > > i configured my nginx with proxy_protocol, ssl and http2: > > server { > listen 443 ssl http2 proxy_protocol; > > ... > } > > Now nginx is accepting proxy_protocol connections but not normal > TCP connections. Is there any chance to let nginx accept both > connection types? I have a situation where there are two different > loadbalancers in front of nginx: > > > - Haproxy with proxy_protocol > > - pfsense with a simple tcp loadbalancer > > It seems nginx is only capable of accepting one connection > type at once. It would be great if nginx could accept normal tcp > connections and proxy_protocol at the same time. Consider configuring different listening sockets with distinct settings, for example, on different ports. -- Maxim Dounin http://nginx.org/ From vukomir at ianculov.ro Thu Mar 30 15:28:16 2017 From: vukomir at ianculov.ro (Vucomir Ianculov) Date: Thu, 30 Mar 2017 17:28:16 +0200 (CEST) Subject: fastcgi_pass and http upstream In-Reply-To: References: <1839306171.950.1490211424022.JavaMail.vukomir@DESKTOP-9I7P6HN> <946720374.959.1490212018452.JavaMail.vukomir@DESKTOP-9I7P6HN> Message-ID: <472601813.102.1490887692149.JavaMail.vukomir@DESKTOP-9I7P6HN> Hi, I have searched the documentation but I was not able to find it; can you please give me an example of how it's done? Thanks. ----- Original Message ----- From: "Yuriy Medvedev" To: nginx at nginx.org Cc: "Vucomir Ianculov" Sent: Wednesday, March 22, 2017 9:57:40 PM Subject: Re: fastcgi_pass and http upstream Yes, it is possible. Please read the documentation. On 22 March 2017 at 22:47, "Vucomir Ianculov via nginx" < nginx at nginx.org > wrote: Corrected question: I have a situation with 2 Apache backend servers and 2 php-fpm backend servers. I would like to set up an upstream that includes all 4 back-ends, or somehow balance the requests across all 4 back-ends. Is it possible? Thanks.
Br, Vuko From: "Vucomir Ianculov via nginx" < nginx at nginx.org > To: "nginx" < nginx at nginx.org > Cc: "Vucomir Ianculov" < vukomir at ianculov.ro > Sent: Wednesday, March 22, 2017 9:37:04 PM Subject: fastcgi_pass and http upstream Hi, is it possible to have an upstream that points to both http and php-fpm backends? Kind Regards, Vucomir Ianculov E-Mail: vukomir at ianculov.ro Phone: (+40) 722 - 690 - 514 Vucomir Ianculov _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Mar 30 22:13:59 2017 From: nginx-forum at forum.nginx.org (CeeGeeDev) Date: Thu, 30 Mar 2017 18:13:59 -0400 Subject: how to proxy a proxy (subrequest with corporate proxy) Message-ID: Greetings, Our custom nginx module implements a number of subrequests (REST calls to other servers to obtain data for our business logic). Everything is working correctly, except one customer requires a corporate HTTP web proxy for the URL (running on a different server) that our subrequests will be hitting. It's not clear to us how to configure a "web proxy" for a subrequest, since the subrequest itself is already basically a "proxy" call. Our subrequests would look like this: location /subrequest { internal; resolver 127.0.0.1; proxy_pass http://rest_server/...; } We're aware of this kind of thing for the main request... but unclear if/how it applies to subrequests? http { upstream corporate_proxy { server proxy.my.company.net:8080; } server { ...
location /custom/main/request/url { proxy_buffering off; proxy_pass_header on; proxy_set_header Host "www.origin-server.com"; proxy_pass http://corporate_proxy; } } } How would we force the subrequest to use the corporate proxy? Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273300,273300#msg-273300 From nginx-forum at forum.nginx.org Thu Mar 30 23:35:48 2017 From: nginx-forum at forum.nginx.org (3achour) Date: Thu, 30 Mar 2017 19:35:48 -0400 Subject: proxy pass issue Message-ID: <8c02b442f5336d860859258fc838199c.NginxMailingListEnglish@forum.nginx.org> hi dear, pls help im really stuck. heres my situation theres a website when i copy as curl from chrome i got this: curl 'https://cdn.livetvhd.net/proxy/http://38.99.146.36:7777/SpaceToonArabic_HD/SpaceToonArabic_High/19172.ts?user=sgls-1&session=3a39537463f8621b199d049b0a85a34ed7f4c88ee89d807565c7e1474bef956a2915608b768e296e2aa44bc17a7df0c3&hlsid=HTTP_ID_405343&group_id=-1&starttime=20170328T181346.532012&start_time=1490916595000&end_time=1490916602916' -H 'Cookie: __cfduid=dabb6eac3c61b6e397c76222aa609fc7f1490509096; _ga=GA1.2.522781785.1490509099; _gat=1' -H 'Accept-Encoding: gzip, deflate, sdch, br' -H 'Accept-Language: en-US,en;q=0.8' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.98 Safari/537.36' -H 'Accept: */*' -H 'Referer: https://livetvhd.net/video/jeunesse/space-toon-live-streaming-en-direct' -H 'X-Requested-With: ShockwaveFlash/25.0.0.127' -H 'Connection: keep-alive' --compressed ps: the link expire after couple of sec. 
so if i play it with ffplay or vlc directly like this : ffplay "https://cdn.livetvhd.net/proxy/http://38.99.146.36:7777/SpaceToonArabic_HD/SpaceToonArabic_High/19172.ts?user=sgls-1&session=3a39537463f8621b199d049b0a85a34ed7f4c88ee89d807565c7e1474bef956a2915608b768e296e2aa44bc17a7df0c3&hlsid=HTTP_ID_405343&group_id=-1&starttime=20170328T181346.532012&start_time=1490916595000&end_time=1490916602916" it doesnt work: Server returned 5XX Server Error reply but if i pipe it to ffplay from curl, it works : heres an example: curl 'https://cdn.livetvhd.net/proxy/http://38.99.146.36:7777/SpaceToonArabic_HD/SpaceToonArabic_High/19172.ts?user=sgls-1&session=3a39537463f8621b199d049b0a85a34ed7f4c88ee89d807565c7e1474bef956a2915608b768e296e2aa44bc17a7df0c3&hlsid=HTTP_ID_405343&group_id=-1&starttime=20170328T181346.532012&start_time=1490916595000&end_time=1490916602916' -H 'Cookie: __cfduid=dabb6eac3c61b6e397c76222aa609fc7f1490509096; _ga=GA1.2.522781785.1490509099; _gat=1' -H 'Accept-Encoding: gzip, deflate, sdch, br' -H 'Accept-Language: en-US,en;q=0.8' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.98 Safari/537.36' -H 'Accept: */*' -H 'Referer: https://livetvhd.net/video/jeunesse/space-toon-live-streaming-en-direct' -H 'X-Requested-With: ShockwaveFlash/25.0.0.127' -H 'Connection: keep-alive' --compressed | ffplay - now i was trying to use proxy pass so i can achieve this: http://localhost/cdn/SpaceToonArabic_HD/SpaceToonArabic_High/19172.ts?user=sgls-1&session=3a39537463f8621b199d049b0a85a34ed7f4c88ee89d807565c7e1474bef956a2915608b768e296e2aa44bc17a7df0c3&hlsid=HTTP_ID_405343&group_id=-1&starttime=20170328T181346.532012&start_time=1490916595000&end_time=1490916602916 but i got error : 404 Not Found heres my config for proxy : location /cdn { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For 163.172.151.160:443; proxy_set_header "User-Agent" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 
(KHTML, like Gecko) Chrome/57.0.2987.98 Safari/537.36"; proxy_set_header X-Custom-Referrer "https://livetvhd.net/video/jeunesse/space-toon-live-streaming-en-direct"; proxy_set_header Host cdn.livetvhd.net; add_header Pragma "no-cache"; proxy_pass_header Set-Cookie; proxy_pass https://cdn.livetvhd.net/proxy/http://38.99.146.38:7777/; proxy_cookie_domain 'livetvhd.net' 'cdn.livetvhd.net'; #add_header Access-Control-Allow-Headers: Authorization, Lang; add_header Access-Control-Expose-Headers 'Server,range,Content-Length,Content-Range'; add_header Access-Control-Allow-Methods 'GET, POST, HEAD, DELETE'; add_header Access-Control-Allow-Origin '*'; proxy_ssl_session_reuse off; proxy_redirect off; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273301,273301#msg-273301 From alex at samad.com.au Thu Mar 30 23:36:34 2017 From: alex at samad.com.au (Alex Samad) Date: Fri, 31 Mar 2017 10:36:34 +1100 Subject: best practise with lua files Message-ID: Hi I have started to use lua files for some dynamic stuff. What's the best practice to secure them? How do I stop them from being downloaded? location ~ \.lua$ { send error back } Is it best to place all of them into a different directory that isn't under the root? A -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Mar 31 06:04:30 2017 From: nginx-forum at forum.nginx.org (ulissestoga) Date: Fri, 31 Mar 2017 02:04:30 -0400 Subject: Speed up file upload via nginx post Message-ID: <0c8bfbec9ef6caf1e5d14dbfc3c3056d.NginxMailingListEnglish@forum.nginx.org> I'm using nginx on amazon ec2, with a laravel system for uploading images. Both KB and MB images take a long time to upload. I would like help implementing some nginx module that makes uploads faster. I use nginx 1.10.2.
Thank you Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273302,273302#msg-273302 From steven.hartland at multiplay.co.uk Fri Mar 31 11:52:16 2017 From: steven.hartland at multiplay.co.uk (Steven Hartland) Date: Fri, 31 Mar 2017 12:52:16 +0100 Subject: nginx upgrade fails due bind error on 127.0.0.1 in a FreeBSD jail In-Reply-To: <20161205171206.GJ18639@mdounin.ru> References: <7e30ad76-e1cf-9fdb-f4e4-114ffbe62a3e@multiplay.co.uk> <20161205132706.GF18639@mdounin.ru> <20161205171206.GJ18639@mdounin.ru> Message-ID: <828c6ecf-1557-107a-a73c-0f80ce6e0c51@multiplay.co.uk> On 05/12/2016 17:12, Maxim Dounin wrote: > Hello! > > On Mon, Dec 05, 2016 at 02:40:27PM +0000, Steven Hartland wrote: > >> On 05/12/2016 13:27, Maxim Dounin wrote: >>> Hello! >>> >>> On Sun, Dec 04, 2016 at 09:39:59PM +0000, Steven Hartland wrote: > [...] > >>>> I believe the change to add a localhost bind to the server in question >>>> was relatively recent so I suspect it has something to do with that. >>>> >>>> The config for this is simply: >>>> server { >>>> listen 127.0.0.1:81; >>>> server_name localhost; >>>> >>>> location /status { >>>> stub_status; >>>> } >>>> } >>>> >>>> The upgrade in this case was: >>>> nginx: 1.10.1_1,2 -> 1.10.2_2,2 >>>> >>>> Now this server is running under FreeBSD in a jail (10.2-RELEASE) and it >>>> has 127.0.0.1 available yet it seems nginx has incorrectly bound the >>>> address: >>>> netstat -na | grep LIST | grep 81 >>>> tcp4 0 0 10.10.96.146.81 *.* LISTEN >>> In a FreeBSD jail with a single IP address any listening address >>> is implicitly converted to the jail address. As a result, if you >>> write in config "127.0.0.1" - upgrade won't work, as it will see >>> inherited socket listening on the jail address (10.10.96.146 in >>> your case) and will try to create a new listening socket with the >>> address from the configuration and this will fail. >> Thanks for the response Maxim. 
>> >> In our case we don't have a single IP in the jail we have 4 addresses: >> 1 x localhost address (127.0.0.1) >> 2 x external >> 1 x private address (10.10.96.146) >> >> We have a number of binds the externals are just port binds the internal >> a localhost e.g. >> listen 443 default_server accept_filter=httpready ssl; >> listen 80 default_server accept_filter=httpready; >> ... >> listen 80; >> listen 443 ssl; >> ... >> listen 127.0.0.1:81; >> >> We're expecting the none IP specified listens to bind to * (this is what >> happens) >> >> But the "listen 127.0.0.1:81" results in "10.10.96.146:81" instead. >> >> Given your description I would only expect this 127.0.0.1 wasn't present >> in the jail and 10.10.96.146 was the only IP available. >> >> Did I miss-understand your description? > Given that the real local address of the listening socket as shown > by netstat is 10.10.96.146, it means that the socket was created > when there were no explicit 127.0.0.1 in the jail. > > And, given that you are able to connect to it via "lwp-request > http://127.0.0.1:81/status", it looks like that 127.0.0.1 is still > not in the jail, but mapped to 10.10.96.146 instead. > > Note that the fact that you can use 127.0.0.1 in a jail doesn't > mean that it is a real address available. Normally, 127.0.0.1 > will be implicitly converted to the main IP of the jail, and most > utilities won't notice. > > (Note well that since there is no real 127.0.0.1 in the jail, it > doesn't provide any additional isolation compared to the jail IP > address. That is, a service which is listening on 127.0.0.1 is in > fact listening on 10.10.96.146, and it is reachable from anywhere, > not just the jail itself.) > >>> There are two possible solutions for this problem: >>> >>> - configure listening on the jail IP address to avoid this >>> implicit conversion; >> As above I'm not sure I follow you correctly as 127.0.0.1 is one of the >> IP's available in the jail. 
> See above, looks like it's not, and it is implicitly converted to > 10.10.96.146 instead. > Sorry it's taken a while, but for those that are interested I committed the fixes for localhost binding in FreeBSD jails today: https://svnweb.freebsd.org/changeset/base/316313 - IPv4 fix https://svnweb.freebsd.org/changeset/base/316328 - IPv6 fix With these fixes "nginx upgrade" will work for configurations which use explicit localhost bindings (e.g. 127.0.0.1) if said IP is also added to the jail. Regards Steve -------------- next part -------------- An HTML attachment was scrubbed... URL: