From mdounin at mdounin.ru Wed Nov 1 12:32:39 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 1 Nov 2017 15:32:39 +0300 Subject: [Module] ngx_http_gzip issue : unknown directive "gzip" In-Reply-To: <4952394.jikkrLoDxI@vbart-workstation> References: <4952394.jikkrLoDxI@vbart-workstation> Message-ID: <20171101123239.GR26836@mdounin.ru> Hello! On Tue, Oct 31, 2017 at 04:51:31PM +0300, Valentin V. Bartenev wrote: > On Monday 30 October 2017 10:33:55 nik mur wrote: > > Hi, > > > > Recently I upgraded my nginx to 1.12 version from 1.10 branch. > > > > The build from source went through without any issues, but while starting > > Nginx I am receiving this error: > > > > *>>[emerg] 342#342: unknown directive "gzip" in > > /usr/local/apps/nginx/etc/conf.d/gzip.conf:2* > > > [..] > > You should check your full configuration. It's unclear where this "gzip" > directive is included. > > Please note, there's no such directive in mail and stream modules. The message suggests that the directive is indeed unknown, not just used in a wrong context. When a directive is used in a wrong context, the message would be "is not allowed here" instead. In this particular case I would recommend to compile without any 3rd party modules, likely one of the 3rd party modules screwed up building due to incorrect hacks used in config script. -- Maxim Dounin http://mdounin.ru/ From arie at nginx.com Wed Nov 1 21:55:27 2017 From: arie at nginx.com (Arie Luttikhuizen) Date: Wed, 1 Nov 2017 14:55:27 -0700 Subject: Announcing crossplane Message-ID: Hey, We on the NGINX Amplify team made an NGINX configuration parser tool. It's named "crossplane" and it's distributed as a Python package. It's under the Apache 2.0 License and it lives here: https://github.com/nginxinc/crossplane. Feel free to check it out, mess around with it, and to use it in your projects. Also, we'd be happy to hear your feedback if you have any. 
Thanks, Arie -- Arie van Luttikhuizen SaaS Developer San Francisco, CA -------------- next part -------------- An HTML attachment was scrubbed... URL: From yuanm12 at 163.com Thu Nov 2 05:09:03 2017 From: yuanm12 at 163.com (=?GBK?B?sLK48Q==?=) Date: Thu, 2 Nov 2017 13:09:03 +0800 (CST) Subject: Performance issue of "ngx_http_mirror_module" In-Reply-To: <94D235B4-844E-4724-BFC8-DE422696878C@me.com> References: <24d0db6e.8cb1.15f5788c5be.Coremail.yuanm12@163.com> <20171026122213.GA75960@Romans-MacBook-Air.local> <591fff42.25c1.15f5b6dc6ca.Coremail.yuanm12@163.com> <94D235B4-844E-4724-BFC8-DE422696878C@me.com> Message-ID: <7c275be5.59c5.15f7b21f4e3.Coremail.yuanm12@163.com> Dear Peter, Please check my comments below. Thanks -- Regards, Yuan Man Trouble is a Friend. At 2017-10-27 22:02:21, "Peter Booth" wrote: There are a few approaches to this, but they depend upon what you're trying to achieve. Are your requests POSTs or GETs? Why do you have the mirroring configured? If the root cause is that your mirror site cannot support the same workload as your primary site, what do you want to happen when your mirror site is overloaded? One approach, using nginx, is to use rate limiting and connection limiting on your mirror server. This is described on the nginx website as part of the DDoS mitigation section. Or, if your bursts of activity are typically for the same resource, then you can use caching with the proxy_cache_use_stale directive. Another approach could be to use lua / openresty to implement a work-shedding interceptor (within nginx) that sits in front of your slow web server. Within lua you would need code that guesses whether or not your web server is overloaded and, if it is, it simply returns a 503 and doesn't forward the request. Sent from my iPhone On Oct 27, 2017, at 2:24 AM, Yuan Man wrote: Dear Roman, Thanks for your valuable response. So, does that mean that if we tune the keep-alive parameters to avoid keep-alive connections,
then we can avoid this kind of performance issue? Or, if the mirror subrequest is slower than the original subrequest, is this kind of performance issue unavoidable? Thanks in advance. -- Regards, Yuan Man Trouble is a Friend. At 2017-10-26 20:22:13, "Roman Arutyunyan" wrote: >Hi, > >On Thu, Oct 26, 2017 at 03:15:02PM +0800, Yuan Man wrote: >> Dear All, >> >> >> I have run into an issue with the nginx "ngx_http_mirror_module" mirror function and want to discuss it with you. >> >> >> The situation is as follows: >> we copy each request from the original environment to the mirror side. The original application can process 600 requests per second, but the mirror environment can only process 100 requests per second. Normally, even if the mirror environment can't process all the requests in time, that should not stop nginx from forwarding requests to the original environment. But we observed that when the mirror environment can't keep up, nginx runs into trouble, and the original environment can't return results to the client in time. From the client's side, it then looks as if nginx is down. Have you faced the same issue before? Any suggestions? > >A mirror request is executed in parallel with the main request and does not >directly affect the main request execution time. However, if you send another >request on the same client connection, it will not be processed until the >previous request and all its subrequests (including mirror subrequests) finish. >So if you use keep-alive client connections and your mirror subrequests are >slow, you may experience some performance issues.
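Peter's rate-limiting suggestion upthread can be combined with the mirror module roughly as follows. This is only a sketch: the upstream addresses, zone name, and burst value are made-up placeholders, and the 100r/s rate simply echoes the mirror capacity mentioned in this thread.

```nginx
http {
    # Cap mirrored traffic at roughly what the slower mirror can absorb.
    limit_req_zone $binary_remote_addr zone=mirror_limit:10m rate=100r/s;

    upstream primary { server 192.0.2.10:8080; }
    upstream shadow  { server 192.0.2.20:8080; }

    server {
        listen 80;

        location / {
            mirror /mirror;            # fire-and-forget copy of each request
            proxy_pass http://primary;
        }

        location = /mirror {
            internal;
            limit_req zone=mirror_limit burst=200;
            proxy_pass http://shadow$request_uri;
        }
    }
}
```

Note that Roman's caveat still applies: on a keep-alive client connection, the next request waits for the mirror subrequest to finish, so shedding excess mirror traffic only limits how long that wait can get.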
> >-- >Roman Arutyunyan >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nikhil6018 at gmail.com Thu Nov 2 08:30:52 2017 From: nikhil6018 at gmail.com (nik mur) Date: Thu, 02 Nov 2017 08:30:52 +0000 Subject: [Module] ngx_http_gzip issue : unknown directive "gzip" In-Reply-To: <20171101123239.GR26836@mdounin.ru> References: <4952394.jikkrLoDxI@vbart-workstation> <20171101123239.GR26836@mdounin.ru> Message-ID: Hi Maxim, Thanks for the tip. It seems the PageSpeed module was the culprit; after compiling without PageSpeed, gzip worked perfectly. On Wed, Nov 1, 2017 at 6:03 PM Maxim Dounin wrote: > Hello! > > On Tue, Oct 31, 2017 at 04:51:31PM +0300, Valentin V. Bartenev wrote: > > > On Monday 30 October 2017 10:33:55 nik mur wrote: > > > Hi, > > > > > > Recently I upgraded my nginx to 1.12 version from 1.10 branch. > > > > > > The build from source went through without any issues, but while > starting > > > Nginx I am receiving this error: > > > > > > *>>[emerg] 342#342: unknown directive "gzip" in > > > /usr/local/apps/nginx/etc/conf.d/gzip.conf:2* > > > > > [..] > > > > You should check your full configuration. It's unclear where this "gzip" > > directive is included. > > > > Please note, there's no such directive in mail and stream modules. > > The message suggests that the directive is indeed unknown, not > just used in a wrong context. When a directive is used in a wrong > context, the message would be "is not allowed here" instead. > > In this particular case I would recommend to compile without any > 3rd party modules, likely one of the 3rd party modules screwed up > building due to incorrect hacks used in config script.
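One quick way to follow this advice is to inspect which modules the binary was configured with. The real configure arguments come from `nginx -V 2>&1`; the string below is a made-up stand-in so the pipeline can be shown without an nginx binary.

```shell
# Simulated `nginx -V` configure arguments (invented for illustration).
args="--prefix=/usr/local/nginx --add-module=/src/ngx_pagespeed --with-http_ssl_module"

# One argument per line makes third-party --add-module entries easy to spot.
echo "$args" | tr ' ' '\n' | grep -E 'add-module|pagespeed'
```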
> > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Nov 2 08:37:35 2017 From: nginx-forum at forum.nginx.org (pavelvd) Date: Thu, 02 Nov 2017 04:37:35 -0400 Subject: OCSP validation of client certificates In-Reply-To: <20140103031727.GD95113@mdounin.ru> References: <20140103031727.GD95113@mdounin.ru> Message-ID: Do you have any plans to support OCSP validation of client certificates? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,238506,277170#msg-277170 From ru at nginx.com Thu Nov 2 10:04:48 2017 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 2 Nov 2017 13:04:48 +0300 Subject: Resolver not re-resolving new ip address of an AW ELB In-Reply-To: <8ac79f05aa3424d54f5333ca2c0acf2e.NginxMailingListEnglish@forum.nginx.org> References: <8ac79f05aa3424d54f5333ca2c0acf2e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20171102100448.GE56095@lo0.su> On Tue, Oct 31, 2017 at 01:26:45PM -0400, RKGood wrote: > Thank you for your replies. I think we have found the root cause. We have > found below: > > When you are using variables in a proxy_pass directive, nginx will use > runtime resolving except if: > the target server is declared as an IP address There's nothing to resolve if it's an address. > the target server name is part of an upstream server group Not part, but the upstream group name. This is documented in the http://nginx.org/r/proxy_pass directive as follows: : Parameter value can contain variables. In this case, if an address : is specified as a domain name, the name is searched among the : described server groups, and, if not found, is determined using : a resolver. > the target server name has already been resolved (e.g.
it matches a > server name in another server block) nginx has its own resolver cache. By default, nginx uses the DNS TTL. If that's not desirable for some reason, you can control the validity of cached entries by using the "valid" parameter of the "resolver" directive; please see http://nginx.org/r/resolver for details. From nginx-forum at forum.nginx.org Thu Nov 2 14:00:34 2017 From: nginx-forum at forum.nginx.org (FrenchFry) Date: Thu, 02 Nov 2017 10:00:34 -0400 Subject: Beginner question - TCP or Sockets ? Message-ID: I'm new to Nginx and the technology in general. I have spent some time on nginx.org. The one thing I'm not sure of is whether I need to configure my web server, which is Thin (ruby), with sockets, or whether TCP should be enough. The reason I ask is because a number of tutorials and blog posts seem to say that the two servers can communicate via TCP, but at the same time most setups seem to be done with sockets. I'm looking for Nginx to serve static files, specifically media files, but not anything that resides in my site's public folder. Anyway, to repeat: do I need sockets set up, or should TCP be enough? So far TCP has not worked for me. Thanks for any advice or direction. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277173,277173#msg-277173 From ru at nginx.com Thu Nov 2 15:43:05 2017 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 2 Nov 2017 18:43:05 +0300 Subject: Beginner question - TCP or Sockets ? In-Reply-To: References: Message-ID: <20171102154305.GI56095@lo0.su> On Thu, Nov 02, 2017 at 10:00:34AM -0400, FrenchFry wrote: > I'm new to Nginx and the technology in general. I have spent some time on > nginx.org. The one thing I'm not sure of is if I need to configure my web > server, which is Thin (ruby), with sockets or should TCP be enough. The > reason I ask is because a number of tutorials and blog posts seem to say that > the two servers can communicate via tcp but at the same time most setups > seem to be done with sockets.
> > I'm looking for Nginx to serve static files, specifically media files, but > not anything that resides in my site's public folder. Anyway, to repeat, do > I need sockets set up or should tcp be enough. So far tcp has not worked for > me. Thanks for any advice , direction. https://serverfault.com/questions/124517/whats-the-difference-between-unix-socket-and-tcp-ip-socket In short: any way would do, when properly configured. From neuronetv at gmail.com Thu Nov 2 16:29:04 2017 From: neuronetv at gmail.com (Anthony Griffiths) Date: Thu, 2 Nov 2017 16:29:04 +0000 Subject: streaming video to mobile phones Message-ID: I'm running nginx-1.12.2 on a centos 6 server and I've had success with it streaming live video to a mobile phone. I followed the instructions here: https://www.vultr.com/docs/setup-nginx-on-ubuntu-to-stream-live-hls-video this page uses mpegurl m3u8 as the streaming protocol. My question is are there any other protocols that work with mobile phones? And how would I specify that in nginx.conf? I've tried webm to ffserver but only got video on a computer. Thanks for any advice. ps: I've left the live stream running and I'd appreciate feedback on whether your phone can play the video (or not) at: http://198.91.92.112/hls.html. Please let me know if it plays for you or not and what kind of phone you have. Thanks again. From nginx-forum at forum.nginx.org Thu Nov 2 16:56:46 2017 From: nginx-forum at forum.nginx.org (FrenchFry) Date: Thu, 02 Nov 2017 12:56:46 -0400 Subject: Beginner question - TCP or Sockets ? In-Reply-To: <20171102154305.GI56095@lo0.su> References: <20171102154305.GI56095@lo0.su> Message-ID: Ok, then I know I'm on the right track. Thanks for the link. It must be something in my conf that is tripping me up. 
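For reference, the two connection styles from this thread look like this in nginx configuration; both forms are standard nginx syntax, though the port number and socket path here are just placeholders, not values from the thread.

```nginx
# TCP: the app server listens on a loopback port.
upstream thin_tcp {
    server 127.0.0.1:8000;
}

# UNIX domain socket: the app server binds a filesystem socket instead.
upstream thin_unix {
    server unix:/tmp/thin.0.sock;
}

server {
    listen 80;
    location / {
        proxy_pass http://thin_tcp;   # or http://thin_unix
        proxy_set_header Host $host;
    }
}
```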
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277173,277176#msg-277176 From arut at nginx.com Thu Nov 2 18:52:23 2017 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 2 Nov 2017 21:52:23 +0300 Subject: streaming video to mobile phones In-Reply-To: References: Message-ID: <20171102185223.GG581@Romans-MacBook-Air.local> Hi Anthony, On Thu, Nov 02, 2017 at 04:29:04PM +0000, Anthony Griffiths wrote: > I'm running nginx-1.12.2 on a centos 6 server and I've had success > with it streaming live video to a mobile phone. I followed the > instructions here: > https://www.vultr.com/docs/setup-nginx-on-ubuntu-to-stream-live-hls-video > this page uses mpegurl m3u8 as the streaming protocol. My question is > are there any other protocols that work with mobile phones? Any protocol can work with mobile phones once you get the right application to play it. HLS (which you called "megurl m3u8") is widely adopted on mobile platforms, so it's a good choice. > And how > would I specify that in nginx.conf? I've tried webm to ffserver but > only got video on a computer. Thanks for any advice. > ps: I've left the live stream running and I'd appreciate feedback on > whether your phone can play the video (or not) at: > http://198.91.92.112/hls.html. Please let me know if it plays for you > or not and what kind of phone you have. Thanks again. -- Roman Arutyunyan From neuronetv at gmail.com Thu Nov 2 19:01:16 2017 From: neuronetv at gmail.com (Anthony Griffiths) Date: Thu, 2 Nov 2017 19:01:16 +0000 Subject: streaming video to mobile phones In-Reply-To: <20171102185223.GG581@Romans-MacBook-Air.local> References: <20171102185223.GG581@Romans-MacBook-Air.local> Message-ID: On Thu, Nov 2, 2017 at 6:52 PM, Roman Arutyunyan wrote: > Hi Anthony, > > On Thu, Nov 02, 2017 at 04:29:04PM +0000, Anthony Griffiths wrote: >> I'm running nginx-1.12.2 on a centos 6 server and I've had success >> with it streaming live video to a mobile phone. 
I followed the >> instructions here: >> https://www.vultr.com/docs/setup-nginx-on-ubuntu-to-stream-live-hls-video >> this page uses mpegurl m3u8 as the streaming protocol. My question is >> are there any other protocols that work with mobile phones? > > Any protocol can work with mobile phones once you get the right application > to play it. HLS (which you called "megurl m3u8") is widely adopted on mobile > platforms, so it's a good choice. thanks but there are several applications in a chain to stream video. Mine is: video source --> ffmpeg --> broadcast server (nginx) --> web player (flowplayer embedded in html5) which part of the chain would you be referring to for the right application?? From medvedev.yp at gmail.com Thu Nov 2 19:19:05 2017 From: medvedev.yp at gmail.com (Iurii Medvedev) Date: Thu, 2 Nov 2017 22:19:05 +0300 Subject: streaming video to mobile phones In-Reply-To: References: <20171102185223.GG581@Romans-MacBook-Air.local> Message-ID: You can use nginx as video streaming server. Its mean you can push or pull to/from nginx. Create hls playlist and your player can stream video. On Nov 2, 2017 22:01, "Anthony Griffiths" wrote: > On Thu, Nov 2, 2017 at 6:52 PM, Roman Arutyunyan wrote: > > Hi Anthony, > > > > On Thu, Nov 02, 2017 at 04:29:04PM +0000, Anthony Griffiths wrote: > >> I'm running nginx-1.12.2 on a centos 6 server and I've had success > >> with it streaming live video to a mobile phone. I followed the > >> instructions here: > >> https://www.vultr.com/docs/setup-nginx-on-ubuntu-to- > stream-live-hls-video > >> this page uses mpegurl m3u8 as the streaming protocol. My question is > >> are there any other protocols that work with mobile phones? > > > > Any protocol can work with mobile phones once you get the right > application > > to play it. HLS (which you called "megurl m3u8") is widely adopted on > mobile > > platforms, so it's a good choice. > > thanks but there are several applications in a chain to stream video. 
Mine > is: > video source --> ffmpeg --> broadcast server (nginx) --> web player > (flowplayer embedded in html5) > which part of the chain would you be referring to for the right > application? > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Nov 2 21:01:59 2017 From: nginx-forum at forum.nginx.org (FrenchFry) Date: Thu, 02 Nov 2017 17:01:59 -0400 Subject: Possibly missing something in my conf file? Message-ID: <6efc9ca234e45c879c3041a0f3802937.NginxMailingListEnglish@forum.nginx.org> I'm a bit confused about what it is I need to get Nginx to serve requests from my app server. I'm running inside an Ubuntu Vagrant box. Nginx runs on guest 80 and the server can be accessed on 8080, while my app server (Thin) runs on guest 8000 and host 4567. Both are either localhost or 127.0.0.1. The problem is I am not seeing anything going on between the two servers. I can see my conf file played out if I go to localhost:8080, but nothing with port 4567. The reverse-proxy description doesn't sound like something I need, since I have no upstream servers. All I want to do is serve up certain files that exist outside of the site directory / public from another directory on my machine. TCP sockets are what I'm trying, but I must be missing something.
This is what I have so far in my conf file:

user vagrant;
worker_processes 1;
error_log /var/log/nginx/error.log debug;
pid /run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user[$time_local] $status '
                    '"$request" $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    include /etc/nginx/mime.types;

    server {
        listen 80;
        server_name 127.0.0.1:8000;
        rewrite_log on;
        access_log /var/log/nginx/access.log;

        location /audio/ {
            root /;
            sendfile on;
            autoindex on;
            tcp_nodelay on;
            keepalive_timeout 65;
        }
    }
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277180,277180#msg-277180 From shanchuan04 at gmail.com Fri Nov 3 07:15:33 2017 From: shanchuan04 at gmail.com (yang chen) Date: Fri, 3 Nov 2017 15:15:33 +0800 Subject: why delta only include the execution time of ngx_process_events not ngx_event_process_posted (Zhang Chao) (Valentin V. Bartenev) Message-ID: Thank you very much, but I have another question: if delta is larger than 1ms, it will execute ngx_event_expire_timers. How do you get the value? Why not 2ms or others? If there are many events coming for 30s, and epoll_wait returns quickly (Linux) in less than 1ms in each cycle, will ngx_event_expire_timers not be executed for 30s? -------------- next part -------------- An HTML attachment was scrubbed... URL: From neuronetv at gmail.com Fri Nov 3 11:01:33 2017 From: neuronetv at gmail.com (Anthony Griffiths) Date: Fri, 3 Nov 2017 11:01:33 +0000 Subject: streaming video to mobile phones In-Reply-To: References: <20171102185223.GG581@Romans-MacBook-Air.local> Message-ID: On Thu, Nov 2, 2017 at 7:19 PM, Iurii Medvedev wrote: > You can use nginx as video streaming server. I'm already using nginx > Its mean you can push or pull > to/from nginx. I'm already pulling from nginx > Create hls playlist and your player can stream video. this topic has nothing to do with a playlist, it's about live streaming video.
Please avoid top posting if you can. From nginx-forum at forum.nginx.org Sat Nov 4 06:03:43 2017 From: nginx-forum at forum.nginx.org (minutero) Date: Sat, 04 Nov 2017 02:03:43 -0400 Subject: proxy_pass $arg_uri Do not decode URL Message-ID: <20eb5d227a90a21ee0871fa170b75c00.NginxMailingListEnglish@forum.nginx.org> nginx 1.12.2 I have a location rule like this to proxy external images:

location /imagesproxy {
    resolver 8.8.8.8;
    proxy_pass $arg_uri;
    proxy_pass_request_headers off;
}

The problem is that nginx doesn't seem to decode the URI /imagesproxy?uri=http%3A%2F%2Fwww.example.com%2Fwp-content%2Fuploads%2F2015%2F07%2Fportada-otra-vez-tu-667x1000.png error.log: [error] 16159#16159: *3602114 invalid URL prefix in "http%3A%2F%2Fwww.alicekellen.com%2F Any ideas? Thank you! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277191,277191#msg-277191 From adam at anschwa.com Sun Nov 5 01:56:29 2017 From: adam at anschwa.com (Adam Schwartz) Date: Sat, 4 Nov 2017 21:56:29 -0400 Subject: random load balancer Message-ID: Hello, I'm experimenting with nginx module development by trying to implement a random load balancer. I see that *us->servers->nelts matches the upstream servers defined in nginx.conf However, something that's confusing to me is where *us->elts[0]->naddrs comes from. My thinking was that I could choose a random integer and select a peer by indexing it. This doesn't work reliably because the "real" webservers are only accessible every two indexes, such as: &peer[0] => foo.example, &peer[3] => bar.example, and &peer[6] => baz.example, etc. I'm having trouble finding why this is the case and any advice would be appreciated. Thanks! -Adam -------------- next part -------------- An HTML attachment was scrubbed...
URL: From im_patriot at yahoo.com Mon Nov 6 12:53:06 2017 From: im_patriot at yahoo.com (Mohammad Puyandeh) Date: Mon, 6 Nov 2017 16:23:06 +0330 Subject: Proxy Request buffering not working as expected Message-ID: If I try to upload a 200M file using the POST method, nginx passes the body to the back-end in chunks of less than 3 MB. Sample nginx config:

worker_processes auto;
user nginx;

events { }

http {
    upstream servers {
        server 127.0.0.1:9999;
    }

    server {
        listen 80;
        client_max_body_size 200M;
        client_body_buffer_size 200M;
        server_name localhost;

        location / {
            proxy_request_buffering on;
            proxy_pass http://servers;
        }
    }
}

A quote about proxy_request_buffering from the docs: """When buffering is enabled, the entire request body is read from the client before sending the request to a proxied server""" but the behavior I am seeing is the opposite of this quote,
If you report that nginx starts to send the content to the back-end before nginx has received all of the content from the client, then proxy_request_buffering is not working as intended. When I try a quick test here, nginx receives the full content from the client before it tries to connect to the back-end. f -- Francis Daly francis at daoine.org From im_patriot at yahoo.com Mon Nov 6 15:43:53 2017 From: im_patriot at yahoo.com (Mohammad Puyandeh) Date: Mon, 6 Nov 2017 19:13:53 +0330 Subject: Proxy Request buffering not working as expected In-Reply-To: <20171106142803.GC3127@daoine.org> References: <20171106142803.GC3127@daoine.org> Message-ID: >> If you report that nginx starts to send the content to the back-end >> before nginx has received all of the content from the client, then >> proxy_request_buffering is not working as intended. Yes, that's what I am reporting, I put together a very simple tornado (python) script, I used both tcp connection and a unix socket, no differences I used curl and my own upload python client, result were the same >> When I try a quick test here, nginx receives the full content from the >> client before it tries to connect to the back-end. I prepared a minimal working sample which is easy to run and test this is python script: https://paste.ubuntu.com/25903714/ (see lines 16 to 23 to know how to run it, it's so simple, you need a Linux based OS and python) and this is nginx config: https://paste.ubuntu.com/25903719/ with the above python client you can easily produce the problem, it will print chunk sizes along with the number of requests, use a file larger than 3M to get better results to make it complete, here is an example to upload file using curl: curl --request POST --data-binary "@/file/path" http://localhost/upload/ This is my first time that I'm using this service, So I thought putting codes somewhere else is cleaner and better -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Mon Nov 6 18:56:19 2017 From: francis at daoine.org (Francis Daly) Date: Mon, 6 Nov 2017 18:56:19 +0000 Subject: Proxy Request buffering not working as expected In-Reply-To: References: <20171106142803.GC3127@daoine.org> Message-ID: <20171106185619.GD3127@daoine.org> On Mon, Nov 06, 2017 at 07:13:53PM +0330, Mohammad Puyandeh via nginx wrote: Hi there, > >>If you report that nginx starts to send the content to the back-end > >>before nginx has received all of the content from the client, then > >>proxy_request_buffering is not working as intended. > Yes, that's what I am reporting, I'm not seeing it in my testing (but I am not using your not-nginx part). > >>When I try a quick test here, nginx receives the full content from the > >>client before it tries to connect to the back-end. > with the above python client you can easily produce the problem, it > will print chunk sizes along with the number of requests, use a file > larger than 3M to get better results "chunk size" and "number of requests" are not related to proxy_request_buffering. Only "the time at which nginx starts to send traffic to the back-end" is related to proxy_request_buffering.. > to make it complete, here is an example to upload file using curl: > curl --request POST --data-binary "@/file/path" > http://localhost/upload/ If you add "--limit-rate 16000" to that curl command, do you see output from your back-end server immediately, or only after enough time has passed that your client has sent all of the content to nginx? f -- Francis Daly francis at daoine.org From shanchuan04 at gmail.com Tue Nov 7 11:36:47 2017 From: shanchuan04 at gmail.com (yang chen) Date: Tue, 7 Nov 2017 19:36:47 +0800 Subject: why delta only include the execution time of ngx_process_events not ngx_event_process_posted (Zhang Chao) (Valentin V. 
Bartenev) Message-ID: Thank you very much, but there is another question: if delta is larger than 1ms, it will invoke ngx_event_expire_timers. Why not 2ms or others? How do you get the value? If there are many events coming for 30s, and epoll_wait returns quickly (Linux) in less than 1ms in each cycle, will ngx_event_expire_timers not be executed for 30s? -------------- next part -------------- An HTML attachment was scrubbed... URL: From im_patriot at yahoo.com Tue Nov 7 14:34:26 2017 From: im_patriot at yahoo.com (Mohammad Puyandeh) Date: Tue, 7 Nov 2017 18:04:26 +0330 Subject: Proxy Request buffering not working as expected In-Reply-To: <20171106185619.GD3127@daoine.org> References: <20171106185619.GD3127@daoine.org> Message-ID: <6de67f01-8fa5-71cf-ec68-c89bd9f03867@yahoo.com> >>If you add "--limit-rate 16000" to that curl command, do you see output >>from your back-end server immediately, or only after enough time has >>passed that your client has sent all of the content to nginx? It's after, so nginx buffers correctly here and I was mistaken. But now the question is: this behavior is not much different from when proxy_request_buffering is set to off. The whole point of using nginx is to buffer the request and pass the request body to the back-end server at once, in a single chunk, but nginx passes it in small chunks. Is there some other configuration for that? How can we achieve this goal then? From vbart at nginx.com Tue Nov 7 15:02:44 2017 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 07 Nov 2017 18:02:44 +0300 Subject: why delta only include the execution time of ngx_process_events not ngx_event_process_posted (Zhang Chao) (Valentin V. Bartenev) In-Reply-To: References: Message-ID: <1606954.RHvb121VqZ@vbart-laptop> On Tuesday, 7 November 2017 14:36:47 MSK yang chen wrote: > Thank you very much, but there is another question, if delta is larger than > 1ms, it will invoke ngx_event_expire_timers, why not 2ms or others?
how > do you get the value? > if there are many events coming for 30s, and epoll_wait returns quickly > (Linux) in less than 1ms in each cycle, > will ngx_event_expire_timers not be executed for 30s? > ngx_event_expire_timers() doesn't care about delta at all. It uses absolute time, and every timer has an absolute time at which it has to be triggered. The function triggers all the timers whose absolute time is less than or equal to the current time. This delta check is just an optimization. I suggest you read the code of the function. It's hard to understand the logic if you read only part of it and don't have the whole picture in mind. wbr, Valentin V. Bartenev From vbart at nginx.com Tue Nov 7 15:22:00 2017 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 07 Nov 2017 18:22:00 +0300 Subject: why delta only include the execution time of ngx_process_events not ngx_event_process_posted (Zhang Chao) (Valentin V. Bartenev) In-Reply-To: <1606954.RHvb121VqZ@vbart-laptop> References: <1606954.RHvb121VqZ@vbart-laptop> Message-ID: <3864273.jKc1CoCJJi@vbart-laptop> On Tuesday, 7 November 2017 18:02:44 MSK Valentin V. Bartenev wrote: > On Tuesday, 7 November 2017 14:36:47 MSK yang chen wrote: > > Thank you very much, but there is another question, if delta is larger than > > 1ms, it will invoke ngx_event_expire_timers, why not 2ms or others? how > > do you get the value? > > if there are many events coming for 30s, and epoll_wait returns quickly > > (Linux) in less than 1ms in each cycle, > > will ngx_event_expire_timers not be executed for 30s? > > > > ngx_event_expire_timers() doesn't care about delta at all. It uses absolute > time, and every timer has an absolute time at which it has to be triggered. > The function triggers all the timers whose absolute time is less than or equal to > the current time. This delta check is just an optimization. > > I suggest you read the code of the function.
It's hard to understand the > logic if you read only part of it and don't have the whole picture in > mind. > Or, maybe I misunderstand your question. ngx_current_msec is updated once in the loop. In short, it looks like this:

loop {
    delta = time;
    update(time);
    delta = time - delta;

    if (delta) {
        expire_timers();
    }
}

Eventually, the time shift will be 1ms or more. wbr, Valentin V. Bartenev From shanchuan04 at gmail.com Wed Nov 8 02:24:59 2017 From: shanchuan04 at gmail.com (yang chen) Date: Wed, 8 Nov 2017 10:24:59 +0800 Subject: why delta only include the execution time of ngx_process_events not ngx_event_process_posted (Zhang Chao) (Valentin V. Bartenev) Message-ID: Thank you very much. I didn't see your reply in my mailing list folder, or I overlooked it, so I sent the question twice; sorry. I got it now. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From joel.parker.gm at gmail.com Wed Nov 8 02:31:15 2017 From: joel.parker.gm at gmail.com (Joel Parker) Date: Tue, 7 Nov 2017 20:31:15 -0600 Subject: Conditional $uri and html from file Message-ID: I am using lua to parse the username and password out of the posted form. If the username == user and password == password, I want to change the URI to http://www.somesite.com//forum/unauthorized.html otherwise, I want it just to do: proxy_pass http://$http_host$uri$is_args$args; (http://somesite.com.com/forum/ucp.php?mode=login) The unauthorized.html is located in /data/www/ on the nginx server.
Here is my nginx.conf:

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    gzip on;

    server {
        listen 80;

        location / {
            resolver 8.8.8.8;
            lua_need_request_body on;

            content_by_lua_block {
                ngx.req.read_body();
                local post_params = ngx.req.get_post_args();
                local username;
                local password;
                if (post_params) then
                    -- Iterate through post params
                    for key,value in pairs(post_params) do
                        if (key == "username") then
                            username = value;
                        elseif (key == "password") then
                            password = value;
                        end
                        -- ngx.say(key," : ", value);
                    end
                    if (username and password) then
                        -- ngx.say(username);
                        -- ngx.say(password);
                        if (username == "user" and password == "password") then
                            *-- WHAT DO I DO HERE ?*
                        end
                    end
                end
            }

            proxy_pass http://$http_host$uri$is_args$args;
        }
    }
}

-------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Nov 8 16:51:36 2017 From: nginx-forum at forum.nginx.org (keyun89) Date: Wed, 08 Nov 2017 11:51:36 -0500 Subject: Does anyone know how to configure the session inactivity timeout in Nginx ? Message-ID: <8ca98718a1d7194e33be58862a38258d.NginxMailingListEnglish@forum.nginx.org> Does anyone know how to configure the session inactivity timeout in Nginx ?
Thanks, Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277214,277214#msg-277214 From francis at daoine.org Thu Nov 9 13:36:03 2017 From: francis at daoine.org (Francis Daly) Date: Thu, 9 Nov 2017 13:36:03 +0000 Subject: Proxy Request buffering not working as expected In-Reply-To: <6de67f01-8fa5-71cf-ec68-c89bd9f03867@yahoo.com> References: <20171106185619.GD3127@daoine.org> <6de67f01-8fa5-71cf-ec68-c89bd9f03867@yahoo.com> Message-ID: <20171109133603.GE3127@daoine.org> On Tue, Nov 07, 2017 at 06:04:26PM +0330, Mohammad Puyandeh via nginx wrote: Hi there, > >>If you add "--limit-rate 16000" to that curl command, do you see output > >>from your back-end server immediately, or only after enough time has > >>passed that your client has sent all of the content to nginx? > > It's after, so Nginx buffers correctly here and I was mistaken > > but now the question is, this behavior is not much different than > when proxy_request_buffering is set to off If you repeat the test with "proxy_request_buffering off" in nginx.conf, you should see that the back-end server starts receiving content sooner. That is pretty much the only thing that proxy_request_buffering is for. > the whole point of using nginx is to buffer request and pass > request body to back-end server at once in a single chunk, but nginx > pass it in small chunks, there is some other configuration for that > ? how can we achieve this goal then ? What do you mean by "single chunk" and "small chunks"? Do you mean "nginx uses Transfer-Encoding: chunked in its request to upstream", or do you mean something else? Because if you mean "Transfer-Encoding: chunked", you can disable that by using HTTP/1.0 in the connection from nginx to upstream (http://nginx.org/r/proxy_http_version); but that is the default and you did not show it having been changed.
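To make the two settings discussed above concrete, here is a minimal sketch of a location where both defaults are spelled out explicitly (the upstream name "backend" is a placeholder):

```nginx
location /upload/ {
    # Default behaviour: nginx reads the whole request body first,
    # then opens the upstream connection and sends it in one go.
    proxy_request_buffering on;

    # HTTP/1.0 to the upstream (also the default) means the buffered
    # body is sent with a Content-Length header rather than
    # "Transfer-Encoding: chunked".
    proxy_http_version 1.0;

    proxy_pass http://backend;   # placeholder upstream
}
```

With "proxy_request_buffering off;" instead, nginx starts forwarding body data to the upstream as it arrives from the client.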
f -- Francis Daly francis at daoine.org From joel.parker.gm at gmail.com Thu Nov 9 20:19:14 2017 From: joel.parker.gm at gmail.com (Joel Parker) Date: Thu, 9 Nov 2017 14:19:14 -0600 Subject: ngx.shared.DICT serialize / deserialize Message-ID: I am trying to load a table from disk (deserialize) into memory and then add, change, remove the values in the table then write it periodically back to disk (serialize). I looked at the documentation for the ngx.shared.DICT ( https://github.com/openresty/lua-nginx-module#ngxshareddict) and it seems like it would fit my needs but I do not see anywhere in the documentation how to load the table initially from disk and modify then write back to disk. Could someone provide a basic example of how I might accomplish this ? Joel -------------- next part -------------- An HTML attachment was scrubbed... URL: From joel.parker.gm at gmail.com Thu Nov 9 21:17:36 2017 From: joel.parker.gm at gmail.com (Joel Parker) Date: Thu, 9 Nov 2017 15:17:36 -0600 Subject: NGINX lifecycle Message-ID: I want to load a table of key/value pairs from the file system when nginx starts and not every time a request comes in. I am going to use the key/value pairs to compare against incoming post args in my location block. My question is how many times is init_by_lua_block called ? or is there somewhere else I should be loading the file ? server { init_by_lua_block { some_global_var = stuff from file io read; } location \ { ... } } -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Fri Nov 10 08:09:53 2017 From: francis at daoine.org (Francis Daly) Date: Fri, 10 Nov 2017 08:09:53 +0000 Subject: Does anyone know how to configure the session inactivity timeout in Nginx ? 
In-Reply-To: <8ca98718a1d7194e33be58862a38258d.NginxMailingListEnglish@forum.nginx.org> References: <8ca98718a1d7194e33be58862a38258d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20171110080953.GF3127@daoine.org> On Wed, Nov 08, 2017 at 11:51:36AM -0500, keyun89 wrote: Hi there, > Does anyone know how to configure the session inactivity timeout in Nginx ? There probably isn't a session inactivity timeout in nginx. There probably is the idea of a session in whatever (dynamic?) thing you are using to deal with sessions; that is the place to look for a timeout setting. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Fri Nov 10 08:19:28 2017 From: francis at daoine.org (Francis Daly) Date: Fri, 10 Nov 2017 08:19:28 +0000 Subject: Regex on Variable ($servername) In-Reply-To: <07ef84ca-f730-1279-d01a-6cf8febd53ef@unix-solution.de> References: <07ef84ca-f730-1279-d01a-6cf8febd53ef@unix-solution.de> Message-ID: <20171110081928.GG3127@daoine.org> On Sun, Oct 29, 2017 at 11:53:23AM +0100, basti wrote: Hi there, > In this example from nginx docs domain has "fullname". > > server { > server_name ~^(www\.)?(*?*.+)$; > root /sites/*$domain*; > } When I use the config server { server_name ~^(www\.)?(?<domain>.+)$; return 200 "domain is $domain\n"; } I get the message nginx: [emerg] pcre_compile() failed: unrecognized character after (?< in "^(www\.)?(?<domain>.+)$" at "domain>.+)$" in /usr/local/nginx/conf/nginx.conf because my PCRE version does not recognise that syntax. The page at http://nginx.org/en/docs/http/server_names.html does say: """ The PCRE library supports named captures using the following syntax: ?<name> Perl 5.10 compatible syntax, supported since PCRE-7.0 ?'name' Perl 5.10 compatible syntax, supported since PCRE-7.0 ?P<name> Python compatible syntax, supported since PCRE-4.0 If nginx fails to start and displays the error message: pcre_compile() failed: unrecognized character after (?< in ...
this means that the PCRE library is old and the syntax "?P<name>" should be tried instead. """ When I change the main line (by adding an extra P) to be server_name ~^(www\.)?(?P<domain>.+)$; then it all seems to work for me: $ curl -H Host:www.example.com http://localhost/ domain is example.com $ curl -H Host:example.com http://localhost/ domain is example.com $ curl -H Host:no.example.com http://localhost/ domain is no.example.com > servername: www.example.com -> $domain should be example.com It works for me. What output do you get instead of what you want to get? f -- Francis Daly francis at daoine.org From lazyvislee at gmail.com Fri Nov 10 09:08:55 2017 From: lazyvislee at gmail.com (Vis Lee) Date: Fri, 10 Nov 2017 17:08:55 +0800 Subject: The "worker process is shutting down" is running all the time, How should I do? Message-ID: Hi, The nginx is http proxy. when I use upgrade websocket and send heartbeat per 5s(client_body_timeout 6s;) the directives "worker_shutdown_timeout" is invalid, the "worker process is shutting down" produced by nginx -s reload is running all the time. How should I do? Regards, leevis -------------- next part -------------- An HTML attachment was scrubbed... URL: From lazyvislee at gmail.com Fri Nov 10 09:51:50 2017 From: lazyvislee at gmail.com (Vis Lee) Date: Fri, 10 Nov 2017 17:51:50 +0800 Subject: The "worker process is shutting down" is running all the time, How should I do? In-Reply-To: References: Message-ID: The nginx will add timer, the handler is as follows. The c[i].read->handler is not processing the 'close' and 'error' flag?
```
static void
ngx_shutdown_timer_handler(ngx_event_t *ev)
{
    ngx_uint_t         i;
    ngx_cycle_t       *cycle;
    ngx_connection_t  *c;

    cycle = ev->data;
    c = cycle->connections;

    for (i = 0; i < cycle->connection_n; i++) {
        if (c[i].fd == (ngx_socket_t) -1
            || c[i].read == NULL
            || c[i].read->accept
            || c[i].read->channel
            || c[i].read->resolver)
        {
            continue;
        }

        ngx_log_debug1(NGX_LOG_DEBUG_CORE, ev->log, 0,
                       "*%uA shutdown timeout", c[i].number);

        c[i].close = 1;
        c[i].error = 1;
        c[i].read->handler(c[i].read);
    }
}
```

2017-11-10 17:08 GMT+08:00 Vis Lee : > > Hi, > > > The nginx is http proxy. when I use upgrade websocket and send heartbeat > per 5s(client_body_timeout 6s;) the directives "worker_shutdown_timeout" is > invalid, the "worker process is shutting down" produced by nginx -s reload > is running all the time. > > How should I do? > > Regards, > > leevis > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mailinglist at unix-solution.de Fri Nov 10 11:02:28 2017 From: mailinglist at unix-solution.de (basti) Date: Fri, 10 Nov 2017 12:02:28 +0100 Subject: Regex on Variable ($servername) In-Reply-To: <20171110081928.GG3127@daoine.org> References: <07ef84ca-f730-1279-d01a-6cf8febd53ef@unix-solution.de> <20171110081928.GG3127@daoine.org> Message-ID: <2357921f-3296-8c76-3995-27110d1cf437@unix-solution.de> Hello, the Problem are "multiple" subs. When I use your example i get curl -H Host:foo.www.example.com http://localhost/ domain is foo.www.example.com But I need example.com like here https://regex101.com/r/9h3D77/1 Best Regards On 10.11.2017 09:19, Francis Daly wrote: > On Sun, Oct 29, 2017 at 11:53:23AM +0100, basti wrote: > > Hi there, > >> In this example from nginx docs domain has "fullname". >> >> server { >> server_name ~^(www\.)?(*?*.+)$; >>
root /sites/*$domain*; >> } > > When I use the config > > server { > server_name ~^(www\.)?(?<domain>.+)$; > return 200 "domain is $domain\n"; > } > > I get the message > > nginx: [emerg] pcre_compile() failed: unrecognized character > after (?< in "^(www\.)?(?<domain>.+)$" at "domain>.+)$" in > /usr/local/nginx/conf/nginx.conf > > because my PCRE version does not recognise that syntax. > > The page at http://nginx.org/en/docs/http/server_names.html does say: > > """ > The PCRE library supports named captures using the following syntax: > > ?<name> Perl 5.10 compatible syntax, supported since PCRE-7.0 > ?'name' Perl 5.10 compatible syntax, supported since PCRE-7.0 > ?P<name> Python compatible syntax, supported since PCRE-4.0 > If nginx fails to start and displays the error message: > pcre_compile() failed: unrecognized character after (?< in ... > this means that the PCRE library is old and the syntax "?P<name>" should be tried instead. > """ > > When I change the main line (by adding an extra P) to be > > server_name ~^(www\.)?(?P<domain>.+)$; > > then it all seems to work for me: > > $ curl -H Host:www.example.com http://localhost/ > domain is example.com > $ curl -H Host:example.com http://localhost/ > domain is example.com > $ curl -H Host:no.example.com http://localhost/ > domain is no.example.com > >> servername: www.example.com -> $domain should be example.com > > It works for me. What output do you get instead of what you want to get? > > f > From mdounin at mdounin.ru Fri Nov 10 11:40:22 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 10 Nov 2017 14:40:22 +0300 Subject: NGINX lifecycle In-Reply-To: References: Message-ID: <20171110114022.GK26836@mdounin.ru> Hello! On Thu, Nov 09, 2017 at 03:17:36PM -0600, Joel Parker wrote: > I want to load a table of key/value pairs from the file system when nginx > starts and not every time a request comes in. I am going to use the > key/value pairs to compare against incoming post args in my location block.
> > My question is how many times is init_by_lua_block called ? or is there > somewhere else I should be loading the file ? > > server { > init_by_lua_block { > some_global_var = stuff from file io read; > } > > location \ { > ... > } > } For key-value pairs there is the map module in nginx; there is no need to use 3rd party modules. See http://nginx.org/en/docs/http/ngx_http_map_module.html for details. -- Maxim Dounin http://mdounin.ru/ From peter_booth at me.com Fri Nov 10 12:39:04 2017 From: peter_booth at me.com (Peter Booth) Date: Fri, 10 Nov 2017 07:39:04 -0500 Subject: Does anyone know how to configure the session inactivity timeout in Nginx ? In-Reply-To: <20171110080953.GF3127@daoine.org> References: <8ca98718a1d7194e33be58862a38258d.NginxMailingListEnglish@forum.nginx.org> <20171110080953.GF3127@daoine.org> Message-ID: This is true in general, but with a single exception that I know of. It's common for nginx to proxy requests to a Rails app or Java app on an app server and for the app server to implement the session logic. This is an OpenResty session implementation that sits within the nginx process: https://github.com/bungle/lua-resty-session > On Nov 10, 2017, at 3:09 AM, Francis Daly wrote: > > On Wed, Nov 08, 2017 at 11:51:36AM -0500, keyun89 wrote: > > Hi there, > >> Does anyone know how to configure the session inactivity timeout in Nginx ? > > There probably isn't a session inactivity timeout in nginx. > > There probably is the idea of a session in whatever (dynamic?) thing > you are using to deal with sessions; that is the place to look for a > timeout setting. > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed...
URL: From francis at daoine.org Fri Nov 10 14:04:01 2017 From: francis at daoine.org (Francis Daly) Date: Fri, 10 Nov 2017 14:04:01 +0000 Subject: Regex on Variable ($servername) In-Reply-To: <2357921f-3296-8c76-3995-27110d1cf437@unix-solution.de> References: <07ef84ca-f730-1279-d01a-6cf8febd53ef@unix-solution.de> <20171110081928.GG3127@daoine.org> <2357921f-3296-8c76-3995-27110d1cf437@unix-solution.de> Message-ID: <20171110140401.GH3127@daoine.org> On Fri, Nov 10, 2017 at 12:02:28PM +0100, basti wrote: Hi there, > the Problem are "multiple" subs. > > When I use your example i get > > curl -H Host:foo.www.example.com http://localhost/ > domain is foo.www.example.com Yes - the example config you gave very specifically ignores "www." at the start of the Host: header, and sets $domain to the rest. If you want a different part of the Host header, you must decide what exactly you want, and write a config that does that. > But I need example.com like here https://regex101.com/r/9h3D77/1 I think that that says you want the capturing regex (\w+\.\w+)$ so that you will keep the last two dot-separated parts, such that www.example.com becomes example.com and www.example.co.uk becomes co.uk. Without testing, I'd guess that server_name ~ (?<domain>\w+\.\w+)$; probably comes close to what you want, assuming that your regex engine inside your nginx supports \w. f -- Francis Daly francis at daoine.org From erik.l.nelson at bankofamerica.com Fri Nov 10 14:08:58 2017 From: erik.l.nelson at bankofamerica.com (Nelson, Erik - 2) Date: Fri, 10 Nov 2017 14:08:58 +0000 Subject: NGINX lifecycle In-Reply-To: <20171110114022.GK26836@mdounin.ru> References: <20171110114022.GK26836@mdounin.ru> Message-ID: <9FB8528595F3BE4E9D4AAB664B7A500D2285A59A@smtp_mail.bankofamerica.com> > On Thu, Nov 09, 2017 at 03:17:36PM -0600, Joel Parker wrote: > > > I want to load a table of key/value pairs from the file system when nginx > > starts and not every time a request comes in.
I am going to use the > > key/value pairs to compare against incoming post args in my location block. > > > > My question is how many times is init_by_lua_block called ? or is there > > somewhere else I should be loading the file ? Looks like once to me. https://github.com/openresty/lua-nginx-module#init_worker_by_lua_block says Runs the specified Lua code upon every Nginx worker process's startup when the master process is enabled. When the master process is disabled, this hook will just run after init_by_lua*. Also, there's an openresty-en mailing list, that might be a better place for openresty-specific questions. ---------------------------------------------------------------------- This message, and any attachments, is for the intended recipient(s) only, may contain information that is privileged, confidential and/or proprietary and subject to important terms and conditions available at http://www.bankofamerica.com/emaildisclaimer. If you are not the intended recipient, please delete this message. From nginx-forum at forum.nginx.org Fri Nov 10 16:36:26 2017 From: nginx-forum at forum.nginx.org (rihad) Date: Fri, 10 Nov 2017 11:36:26 -0500 Subject: Enabling both gzip & brotli Message-ID: <1f5c4217d44c339a19affdd40194eef2.NginxMailingListEnglish@forum.nginx.org> Hello. Can I enable both brotli & gzip? brotli on; gzip on; with the idea to support both newer & older clients, but still not do any kind of double compression. Thanks. 
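As a sketch, a combined setup along the lines asked about here might look as follows (the brotli directives come from the third-party google/ngx_brotli module, so treat them as an assumption unless that module is compiled in):

```nginx
# Modern clients that send "Accept-Encoding: br" get Brotli; everyone
# else falls back to gzip. Each response is compressed with only one
# of the two encodings, never both.
brotli on;
brotli_comp_level 6;
brotli_types text/plain text/css application/json application/javascript;

gzip on;
gzip_types text/plain text/css application/json application/javascript;
```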
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277282,277282#msg-277282 From nginx-forum at forum.nginx.org Fri Nov 10 19:16:53 2017 From: nginx-forum at forum.nginx.org (shiz) Date: Fri, 10 Nov 2017 14:16:53 -0500 Subject: Enabling both gzip & brotli In-Reply-To: <1f5c4217d44c339a19affdd40194eef2.NginxMailingListEnglish@forum.nginx.org> References: <1f5c4217d44c339a19affdd40194eef2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2c2e4cff5f08f0c9a9f8fa9e7bafae9e.NginxMailingListEnglish@forum.nginx.org> Yes, but I prefer to generate the *.br first and use brotli_static on; instead. The browser will happily download the *.br if supported; otherwise gzip will be selected. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277282,277284#msg-277284 From dewanggaba at xtremenitro.org Fri Nov 10 21:01:26 2017 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Sat, 11 Nov 2017 04:01:26 +0700 Subject: Enabling both gzip & brotli In-Reply-To: <1f5c4217d44c339a19affdd40194eef2.NginxMailingListEnglish@forum.nginx.org> References: <1f5c4217d44c339a19affdd40194eef2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2d01a446-0681-d40a-11e4-81d51762d607@xtremenitro.org> Hello! On 11/10/2017 11:36 PM, rihad wrote: > Hello. Can I enable both brotli & gzip? Yeah sure, I tested it using this module https://github.com/google/ngx_brotli > brotli on; gzip on; > > with the idea to support both newer & older clients, but still not > do any kind of double compression. If the browser doesn't support "br" yet, it should fall back to gzip. > Thanks.
> > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,277282,277282#msg-277282 > > _______________________________________________ nginx mailing list > nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx > From agentzh at gmail.com Sat Nov 11 21:30:36 2017 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Sat, 11 Nov 2017 13:30:36 -0800 Subject: ngx.shared.DICT serialize / deserialize In-Reply-To: References: Message-ID: Hello! On Thu, Nov 9, 2017 at 12:19 PM, Joel Parker wrote: > I am trying to load a table from disk (deserialize) into memory and then > add, change, remove the values in the table then write it periodically back > to disk (serialize). I looked at the documentation for the ngx.shared.DICT > (https://github.com/openresty/lua-nginx-module#ngxshareddict) and it seems > like it would fit my needs but I do not see anywhere in the documentation > how to load the table initially from disk and modify then write back to > disk. Could someone provide a basic example of how I might accomplish this ?
> You need to do serialization and persistence support yourself (like using lua-cjson/lua-msgpack for serialization and the standard Lua io module for file reads/writes). These are not built into the shdict API directly. BTW, better post such questions to the openresty-en mailing list: https://openresty.org/en/community.html Best regards, Yichun From hemelaar at desikkel.nl Sun Nov 12 11:03:47 2017 From: hemelaar at desikkel.nl (Jean-Paul Hemelaar) Date: Sun, 12 Nov 2017 12:03:47 +0100 Subject: Different Naxsi rulesets Message-ID: Hi! I'm using Nginx together with Naxsi; so not sure it this is the correct place for this post, but I'll give it a try. I want to configure two detection thresholds: a strict detection threshold for 'far away countries', and a less-strict set for local countries. I'm using a setup like: location /strict/ { include /usr/local/nginx/naxsi.rules.strict; proxy_pass http://app-server/; } location /not_so_strict/ { include /usr/local/nginx/naxsi.rules.not_so_strict; proxy_pass http://app-server/; } location / { # REMOVED BUT THIS WORKS: # include /usr/local/nginx/naxsi.rules.not_so_strict; set $ruleSet "strict"; if ( $geoip_country_code ~ (TRUSTED_CC_1|TRUSTED_CC_2TRUSTED_CC_3) ) { set $ruleSet "not_so_strict"; } rewrite ^(.*)$ /$ruleSet$1 last; } location /RequestDenied { return 403; } The naxsi.rules.strict file contains the check rules: CheckRule "$SQL >= 8" BLOCK; etc. For some reason this doesn't work. The syntax is ok, and I can reload Nginx. However the firewall never triggers. If I uncomment the include in the location-block / it works perfectly. Any idea's why this doesn't work, or any better setup to use different rulesets based on some variables? Thanks, JP -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From arozyev at nginx.com Sun Nov 12 13:34:08 2017 From: arozyev at nginx.com (Aziz Rozyev) Date: Sun, 12 Nov 2017 16:34:08 +0300 Subject: Different Naxsi rulesets In-Reply-To: References: Message-ID: at least you're missing an or (|) operator between > TRUSTED_CC_2 and TRUSTED_CC_3 > > br, Aziz. > > > > > On 12 Nov 2017, at 14:03, Jean-Paul Hemelaar > wrote: > > > > Hi! > > > > I'm using Nginx together with Naxsi; so not sure it this is the correct > place for this post, but I'll give it a try. > > > > I want to configure two detection thresholds: a strict detection > threshold for 'far away countries', and a less-strict set > > for local countries. I'm using a setup like: > > > > location /strict/ { > > include /usr/local/nginx/naxsi.rules.strict; > > > > proxy_pass http://app-server/; > > } > > > > location /not_so_strict/ { > > include /usr/local/nginx/naxsi.rules.not_so_strict; > > > > proxy_pass http://app-server/; > > } > > > > location / { > > # REMOVED BUT THIS WORKS: > > # include /usr/local/nginx/naxsi.rules.not_so_strict; > > set $ruleSet "strict"; > > if ( $geoip_country_code ~ (TRUSTED_CC_1|TRUSTED_CC_2TRUSTED_CC_3) > ) { > > set $ruleSet "not_so_strict"; > > } > > > > rewrite ^(.*)$ /$ruleSet$1 last; > > } > > > > location /RequestDenied { > > return 403; > > } > > > > > > The naxsi.rules.strict file contains the check rules: > > CheckRule "$SQL >= 8" BLOCK; > > etc. > > > > For some reason this doesn't work. The syntax is ok, and I can reload > Nginx. However the firewall never triggers. If I uncomment the include in > the location-block / it works perfectly.
> > Thanks, > > JP > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From hemelaar at desikkel.nl Sun Nov 12 14:16:23 2017 From: hemelaar at desikkel.nl (Jean-Paul Hemelaar) Date: Sun, 12 Nov 2017 15:16:23 +0100 Subject: Different Naxsi rulesets Message-ID: Hi Aziz, True; this got lost during my copy-anonymize-paste process. The real config doesn't have this. Thanks so far, JP On Sun, Nov 12, 2017 at 2:34 PM, Aziz Rozyev wrote: > at least you?re missing or (|) operator between > > > TRUSTED_CC_2 and TRUSTED_CC_3 > > > > br, > Aziz. > > > > > > > On 12 Nov 2017, at 14:03, Jean-Paul Hemelaar > wrote: > > > > Hi! > > > > I'm using Nginx together with Naxsi; so not sure it this is the correct > place for this post, but I'll give it a try. > > > > I want to configure two detection thresholds: a strict detection > threshold for 'far away countries', and a less-strict set > > for local countries. I'm using a setup like: > > > > location /strict/ { > > include /usr/local/nginx/naxsi.rules.strict; > > > > proxy_pass http://app-server/; > > } > > > > location /not_so_strict/ { > > include /usr/local/nginx/naxsi.rules.not_so_strict; > > > > proxy_pass http://app-server/; > > } > > > > location / { > > # REMOVED BUT THIS WORKS: > > # include /usr/local/nginx/naxsi.rules.not_so_strict; > > set $ruleSet "strict"; > > if ( $geoip_country_code ~ (TRUSTED_CC_1|TRUSTED_CC_2TRUSTED_CC_3) > ) { > > set $ruleSet "not_so_strict"; > > } > > > > rewrite ^(.*)$ /$ruleSet$1 last; > > } > > > > location /RequestDenied { > > return 403; > > } > > > > > > The naxsi.rules.strict file contains the check rules: > > CheckRule "$SQL >= 8" BLOCK; > > etc. > > > > For some reason this doesn't work. The syntax is ok, and I can reload > Nginx. However the firewall never triggers. If I uncomment the include in > the location-block / it works perfectly. 
> > Any idea's why this doesn't work, or any better setup to use different > rulesets based on some variables? > > > > Thanks, > > > > JP > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From shanchuan04 at gmail.com Sun Nov 12 16:38:51 2017 From: shanchuan04 at gmail.com (yang chen) Date: Mon, 13 Nov 2017 00:38:51 +0800 Subject: SIGIO only mean readable or writable, how channel event avoid writable Message-ID: [image: ???? 1] in the "Linux System Programming: Talking Directly to the Kernel and C Library", it says SIGIO mean readable or writable, and in man page it says SIGIO means I/O is possible on a descriptor, so if this, I'm curious that channel is writable, nginx will receive the SIGIO? -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image.png Type: image/png Size: 321901 bytes Desc: not available URL: From nginx-forum at forum.nginx.org Mon Nov 13 01:44:54 2017 From: nginx-forum at forum.nginx.org (JJJoshy) Date: Sun, 12 Nov 2017 20:44:54 -0500 Subject: Nginx proxy with rewrite and limit_except Message-ID: <530d0f7d692e438c7e003060c3cbb827.NginxMailingListEnglish@forum.nginx.org> Would anyone have solutions to the problem described here: https://stackoverflow.com/questions/47255564/nginx-proxy-with-rewrite-and-limit-except-not-working Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277298,277298#msg-277298 From zchao1995 at gmail.com Mon Nov 13 02:11:02 2017 From: zchao1995 at gmail.com (Zhang Chao) Date: Sun, 12 Nov 2017 18:11:02 -0800 Subject: SIGIO only mean readable or writable, how channel event avoid writable In-Reply-To: References: Message-ID: Hi! The channel, on the worker process side, is read-only, since the write end was closed. As for SIGIO, it is just ignored by the worker, while on the master side it only affects a global variable, ngx_sigio, which is never used. However, for generating the signal SIGIO, you have to call fcntl and set the asynchronous mode, but the master doesn't do that. So I think SIGIO will not be delivered when the channel is writable; please correct me if anything is improper :) On 13 November 2017 at 00:39:08, yang chen (shanchuan04 at gmail.com) wrote: [image: ???? 1] in the "Linux System Programming: Talking Directly to the Kernel and C Library", it says SIGIO mean readable or writable, and in man page it says SIGIO means I/O is possible on a descriptor, so if this, I'm curious that channel is writable, nginx will receive the SIGIO? _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: ii_15fb112fdac2408c Type: application/octet-stream Size: 321901 bytes Desc: not available URL: From shanchuan04 at gmail.com Mon Nov 13 02:41:25 2017 From: shanchuan04 at gmail.com (yang chen) Date: Mon, 13 Nov 2017 10:41:25 +0800 Subject: SIGIO only mean readable or writable", how channel event, "avoid writable (Zhang Chao) Message-ID: Thank you for your reply, but each side of the channel can read or write, unlike a pipe, so it's not read-only but readable and writable; please correct me if anything is improper. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Nov 13 09:59:29 2017 From: nginx-forum at forum.nginx.org (justink101) Date: Mon, 13 Nov 2017 04:59:29 -0500 Subject: Track egress bandwidth for a server block Message-ID: <95c7df0554d3b0b5c5fee14c43a443a3.NginxMailingListEnglish@forum.nginx.org> Is there a way to measure and store the amount of egress bandwidth in GB a given server{} block uses over a certain number of days? Needs to be somewhat performant. Using NGINX Unit or Lua are both possible, just no idea how to implement it.
Take a look at https://github.com/vozlt/nginx-module-vts rr From arozyev at nginx.com Mon Nov 13 13:14:51 2017 From: arozyev at nginx.com (Aziz Rozyev) Date: Mon, 13 Nov 2017 16:14:51 +0300 Subject: Different Naxsi rulesets In-Reply-To: References: Message-ID: At first glance the config looks correct, so probably it's something with the naxsi rulesets. Btw, why don't you use maps?

map $geoip_country_code $strictness {
    default "strict";
    CC_1    "not-so-strict";
    CC_2    "not-so-strict";
    # .. more country codes;
}

# strict and not-so-strict locations
map $strictness $path {
    "strict"         "/strict/";
    "not-so-strict"  "/not-so-strict/";
}

location / {
    return 302 $path;
    # ..
}

br, Aziz. > On 12 Nov 2017, at 14:03, Jean-Paul Hemelaar wrote: > > # REMOVED BUT THIS WORKS: > # include /usr/local/nginx/naxsi.rules.not_so_strict; From mdounin at mdounin.ru Mon Nov 13 13:52:54 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 13 Nov 2017 16:52:54 +0300 Subject: The "worker process is shutting down" is running all the time, How should I do? In-Reply-To: References: Message-ID: <20171113135254.GO26836@mdounin.ru> Hello! On Fri, Nov 10, 2017 at 05:08:55PM +0800, Vis Lee wrote: > Hi, > > > The nginx is http proxy. when I use upgrade websocket and send heartbeat > per 5s(client_body_timeout 6s;) the directives "worker_shutdown_timeout" is > invalid, the "worker process is shutting down" produced by nginx -s reload > is running all the time. > > How should I do? What do you mean by "the directives "worker_shutdown_timeout" is invalid"? It does not work for you?
It looks like WebSocket proxying currently fails to handle the connection-close requests that worker_shutdown_timeout generates; please try the following patch:

diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c
--- a/src/http/ngx_http_upstream.c
+++ b/src/http/ngx_http_upstream.c
@@ -3314,6 +3314,11 @@ ngx_http_upstream_process_upgraded(ngx_h
         return;
     }
 
+    if (upstream->close || downstream->close) {
+        ngx_http_upstream_finalize_request(r, u, NGX_ERROR);
+        return;
+    }
+
     if (from_upstream) {
         src = upstream;
         dst = downstream;

-- 
Maxim Dounin
http://mdounin.ru/

From shanchuan04 at gmail.com Mon Nov 13 16:10:03 2017
From: shanchuan04 at gmail.com (yang chen)
Date: Tue, 14 Nov 2017 00:10:03 +0800
Subject: SIGIO only mean readable or writable, how channel event, avoid writable
Message-ID: 

I wrote a C source file and tested it on my machine. The fd is writable as soon as it is opened, but the process doesn't receive SIGIO. I'm confused: a lot of papers and books say that the process will receive SIGIO when the fd becomes readable or writable, but in fact it doesn't. Any ideas?
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <signal.h>
#include <fcntl.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/ioctl.h>

void ngx_signal_handler(int signo, siginfo_t *siginfo, void *ucontext)
{
    printf("%d\n", signo);
}

int main()
{
    int sv[2];

    if (socketpair(PF_LOCAL, SOCK_STREAM, 0, sv) < 0) {
        perror("socketpair");
        return 0;
    }

    struct sigaction sa;
    memset(&sa, 0, sizeof(struct sigaction));
    sa.sa_sigaction = ngx_signal_handler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGIO, &sa, NULL);

    pid_t id = fork();

    if (id == 0) {  /* child */
        close(sv[0]);
        const char* msg = "i'm child\n";
        char buf[1024];

        while (1) {
            /*write(sv[1],msg,strlen(msg));*/
            sleep(1);
            /*ssize_t _s = read(sv[1],buf,sizeof(buf)-1);*/
            /*if(_s > 0)*/
            /*{*/
            /*    buf[_s] = '\0';*/
            /*    printf(" %s\n",buf);*/
            /*}*/
        }
    } else {  /* father */
        close(sv[1]);
        const char* msg = "i'm father\n";
        char buf[1024];
        int on = 1;

        if (ioctl(sv[0], FIOASYNC, &on) == -1) {
            return 1;
        }

        if (fcntl(sv[0], F_SETOWN, getpid()) == -1) {
            return 1;
        }

        while (1) {
            /*ssize_t _s = read(sv[0],buf,sizeof(buf)-1);*/
            /*if(_s > 0)*/
            /*{*/
            /*    buf[_s] = '\0';*/
            /*    printf(" %s\n",buf);*/
            /*    sleep(1);*/
            /*}*/
            /*write(sv[0],msg,strlen(msg));*/
        }
    }

    return 0;
}
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hemelaar at desikkel.nl Mon Nov 13 18:47:09 2017
From: hemelaar at desikkel.nl (Jean-Paul Hemelaar)
Date: Mon, 13 Nov 2017 19:47:09 +0100
Subject: Different Naxsi rulesets
In-Reply-To: 
References: 
Message-ID: 

Hi,

I have updated the config to use 'map' instead of the if-statements. That's indeed a better way.
The problem, however, remains:

- Naxsi mainrules are in the http-block
- Config similar to:

map $geoip_country_code $ruleSetCC {
    default "strict";
    CC1     "relaxed";
    CC2     "relaxed";
}

location /strict/ {
    include /usr/local/nginx/naxsi.rules.strict;

    proxy_pass http://app-server/;
}

location /relaxed/ {
    include /usr/local/nginx/naxsi.rules.relaxed;

    proxy_pass http://app-server/;
}

location / {
    include /usr/local/nginx/naxsi.rules.default;

    set $ruleSet $ruleSetCC;
    rewrite ^(.*)$ /$ruleSet$1 last;
}

It's always using naxsi.rules.default. If this line is removed it's not using any rules (pass-all).

Thanks so far!

JP

On Mon, Nov 13, 2017 at 2:14 PM, Aziz Rozyev wrote:
> At first glance the config looks correct, so probably it's something with the naxsi
> rulesets.
> Btw, why don't you use maps?
>
> map $geoip_country_code $strictness {
>     default  "strict";
>     CC_1     "not-so-strict";
>     CC_2     "not-so-strict";
>     # .. more country codes;
> }
>
> # strict and not-so-strict locations
>
> map $strictness $path {
>     "strict"         "/strict/";
>     "not-so-strict"  "/not-so-strict/";
> }
>
> location / {
>     return 302 $path;
>     # ..
> }
>
> br,
> Aziz.
>
> > On 12 Nov 2017, at 14:03, Jean-Paul Hemelaar wrote:
> >
> > T THIS WORKS:
> > # include /usr/local/n
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at forum.nginx.org Mon Nov 13 19:05:33 2017
From: nginx-forum at forum.nginx.org (justink101)
Date: Mon, 13 Nov 2017 14:05:33 -0500
Subject: Track egress bandwidth for a server block
In-Reply-To: <005701d35c6b$03fcedc0$0bf6c940$@roze.lv>
References: <005701d35c6b$03fcedc0$0bf6c940$@roze.lv>
Message-ID: 

Thanks for linking, but nginx-module-vts seems like overkill and I'm concerned about performance.
Essentially we are building a product that charges by egress bandwidth, and we're looking for a way to track it at the NGINX level.

I was digging a bit further and it seems like using https://www.nginx.com/blog/launching-nginscript-and-looking-ahead/ might be a good solution. Has anybody tried this? Does anybody have a starting-point configuration?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277303,277313#msg-277313

From arozyev at nginx.com Mon Nov 13 19:30:30 2017
From: arozyev at nginx.com (Aziz Rozyev)
Date: Mon, 13 Nov 2017 22:30:30 +0300
Subject: Different Naxsi rulesets
In-Reply-To: 
References: 
Message-ID: 

hello,

how about logs? does naxsi provide any variables that can be monitored?

so far it seems that your rules in "strict|relaxed" are not triggering; the "default" one will always hit (as expected), as it's the first location "/" from where you route to the other 2 locations.

also, try to log in debug mode, maybe that will give more insights.

br,
Aziz.

> On 13 Nov 2017, at 21:47, Jean-Paul Hemelaar wrote:
>
> Hi,
>
> I have updated the config to use 'map' instead of the if-statements. That's indeed a better way.
> The problem however remains:
>
> - Naxsi mainrules are in the http-block
> - Config similar to:
>
> map $geoip_country_code $ruleSetCC {
>     default "strict";
>     CC1     "relaxed";
>     CC2     "relaxed";
> }
>
> location /strict/ {
>     include /usr/local/nginx/naxsi.rules.strict;
>
>     proxy_pass http://app-server/;
> }
>
> location /relaxed/ {
>     include /usr/local/nginx/naxsi.rules.relaxed;
>
>     proxy_pass http://app-server/;
> }
>
> location / {
>     include /usr/local/nginx/naxsi.rules.default;
>
>     set $ruleSet $ruleSetCC;
>     rewrite ^(.*)$ /$ruleSet$1 last;
> }
>
> It's always using naxsi.rules.default. If this line is removed it's not using any rules (pass-all).
>
> Thanks so far!
>
> JP
>
> On Mon, Nov 13, 2017 at 2:14 PM, Aziz Rozyev wrote:
> At first glance the config looks correct, so probably it's something with the naxsi rulesets.
> Btw, why don't you use maps?
>
> map $geoip_country_code $strictness {
>     default  "strict";
>     CC_1     "not-so-strict";
>     CC_2     "not-so-strict";
>     # .. more country codes;
> }
>
> # strict and not-so-strict locations
>
> map $strictness $path {
>     "strict"         "/strict/";
>     "not-so-strict"  "/not-so-strict/";
> }
>
> location / {
>     return 302 $path;
>     # ..
> }
>
> br,
> Aziz.
>
> > On 12 Nov 2017, at 14:03, Jean-Paul Hemelaar wrote:
> >
> > T THIS WORKS:
> > # include /usr/local/n
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From agentzh at gmail.com Mon Nov 13 19:39:17 2017
From: agentzh at gmail.com (Yichun Zhang (agentzh))
Date: Mon, 13 Nov 2017 11:39:17 -0800
Subject: [ANN] OpenResty 1.13.6.1 released
Message-ID: 

Hi there,

I am excited to announce the new formal release, 1.13.6.1, of the OpenResty web platform based on NGINX and LuaJIT:

https://openresty.org/en/download.html

Both the (portable) source code distribution, the Win32 binary distribution, and the pre-built binary Linux packages for all those common Linux distributions are provided on this Download page.

Special thanks go to all our developers and contributors! And thanks OpenResty Inc. for sponsoring a lot of the OpenResty core development work.

We have the following highlights in this release:

1. Based on the latest mainline nginx core 1.13.6.

2. Included the new component ngx_stream_lua_module, which can do nginx TCP servers with Lua: https://github.com/openresty/stream-lua-nginx-module#readme

3. New ttl(), expire(), free_space(), and capacity() Lua methods for lua_shared_dict objects: https://github.com/openresty/lua-nginx-module/#ngxshareddictttl

4.
New resty.limit.count module in lua-resty-limit-traffic for doing GitHub API style limiting: https://github.com/openresty/lua-resty-limit-traffic/blob/master/lib/resty/limit/count.md#readme

5. Added JIT controlling command-line options to the resty command-line utility: https://github.com/openresty/resty-cli#synopsis

6. Wildcard support in the more_clear_input_headers directive: https://github.com/openresty/headers-more-nginx-module/#more_clear_headers

7. HTTP/2 support in the opm client tool (through curl).

The complete change log since the last (formal) release, 1.11.2.5:

* upgraded the Nginx core to 1.13.6.

* see the changes here: http://nginx.org/en/CHANGES

* bundled the new component, ngx_stream_lua_module 0.0.4, which is also enabled by default. One can disable this 3rd-party Nginx C module by passing "--without-stream_lua_module" to the "./configure" script. We provide a compatible Lua API with ngx_lua wherever it makes sense. Currently we support content_by_lua*, preread_by_lua* (similar to ngx_lua's access_by_lua*), log_by_lua*, and balancer_by_lua* in the stream subsystem. thanks Mashape Inc. for sponsoring the OpenResty Inc. team to do the development work on rewriting ngx_stream_lua for recent nginx core versions.

* change: applied a patch to the nginx core to make sure the "server" header in HTTP/2 responses shows "openresty" when the "server_tokens" directive is turned off.

* feature: added the nginx core patches needed by ngx_stream_lua's (https://github.com/openresty/stream-lua-nginx-module) balancer_by_lua*.

* win32: upgraded PCRE to 8.41.

* upgraded ngx_lua to 0.10.11.

* feature: shdict: added a pure C API for getting the free page size and total capacity for lua-resty-core. thanks Hiroaki Nakamura for the patch.

* feature: added pure C functions for the shdict:ttl() and shdict:expire() API functions. thanks Thibault Charbonnier for the patch.

* bugfix: *_by_lua_block directives might break nginx config dump ("-T" switch). thanks Oleg A. Mamontov for the patch.
* bugfix: segmentation faults might happen when pipelined http requests are used in the downstream connection. thanks Gao Yan for the report.

* bugfix: the ssl connections might be drained and reused prematurely when ssl_certificate_by_lua* or ssl_session_fetch_by_lua* were used. this might lead to segmentation faults under load. thanks guanglinlv for the report and the original patch.

* bugfix: tcpsock:connect(): when the nginx resolver's "send()" immediately fails without yielding, we didn't clean up the coroutine ctx state properly. This might lead to segmentation faults. thanks xiaocang for the report and root for the patch.

* bugfix: added a fallthrough comment to silence GCC 7's "-Wimplicit-fallthrough". thanks Andriy Kornatskyy for the report and spacewander for the patch.

* bugfix: tcpsock:settimeout, tcpsock:settimeouts: throw an error when the timeout argument values overflow. Here we only support timeout values no greater than the max value of a 32-bit integer. thanks spacewander for the patch.

* doc: added "413 Request Entity Too Large" to the possible short circuit response list. thanks Datong Sun for the patch.

* upgraded lua-resty-core to 0.1.13.

* feature: ngx.balancer now supports ngx_stream_lua; also disabled all the other FFI APIs for the stream subsystem for now.

* feature: resty.core.shdict: added new methods shdict:free_space() and shdict:capacity(). thanks Hiroaki Nakamura for the patch.

* feature: implemented the ngx.re.gmatch function with FFI. thanks spacewander for the patch.

* bugfix: ngx.re: fix an edge-case where re.split() might not destroy compiled regexes. thanks Thibault Charbonnier for the patch.

* feature: implemented the shdict:ttl() and shdict:expire() API functions using FFI.

* upgraded lua-resty-dns to 0.20.

* feature: allows "RRTYPE" values larger than 255. thanks Peter Wu for the patch.

* upgraded lua-resty-limit-traffic to 0.05.

* feature: added new module resty.limit.count for GitHub API style request count limiting.
thanks Ke Zhu for the original patch and Ming Wen for the followup tweaks.

* bugfix: resty.limit.traffic: we might not uncommit previous limiters if a limiter got rejected while committing a state. thanks Thibault Charbonnier for the patch.

* bugfix: resty.limit.conn: we incorrectly specified the exceeded connection count as the initial value for the shdict key decrement, which may lead to dead locks when the key has been evicted in very busy systems. This bug had appeared in v0.04.

* upgraded resty-cli to 0.20.

* feature: resty: implemented the "-j off" option to turn off the JIT compiler.

* feature: resty: implemented the "-j v" and "-j dump" options similar to luajit's.

* feature: resty: added new command-line option "-l LIB" to mimic lua and luajit's -l parameter. thanks Michal Cichra for the patch.

* bugfix: resty: handle "SIGPIPE" ourselves by simply killing the process. thanks Ingy dot Net for the report.

* bugfix: resty: hot looping Lua scripts failed to respond to the "INT" signal.

* upgraded opm to 0.0.4.

* bugfix: opm: when curl uses HTTP/2 by default opm would complain about "bad response status line received". thanks Donal Byrne and Andrew Redden for the report.

* debug: opm: added more details in the "bad response status line received from server" error.

* upgraded ngx_headers_more to 0.33.

* feature: add wildcard match support for more_clear_input_headers.

* doc: fixed more_clear_input_headers usage examples. thanks Daniel Paniagua for the patch.

* upgraded ngx_encrypted_session to 0.07.

* bugfix: fixed one potential memory leak in an error condition. thanks dyu for the patch.

* upgraded ngx_rds_json to 0.15.

* bugfix: fixed warnings with C compilers without variadic macro support.

* doc: added context info for all the config directives.

* upgraded ngx_rds_csv to 0.08.

* tests: various changes in the test suite.
* upgraded LuaJIT to v2.1-20171103: https://github.com/openresty/luajit2/tags

* optimize: use more aggressive JIT compiler parameters as the default to help large OpenResty Lua apps. We now use the following jit.opt defaults: "maxtrace=8000 maxrecord=16000 minstitch=3 maxmcode=40960 -- in KB".

* imported Mike Pall's latest changes:

    * "LJ_GC64": Make "ASMREF_L" references 64 bit.

    * "LJ_GC64": Fix ir_khash for non-string GCobj.

    * DynASM/x86: Fix potential "REL_A" overflow. Thanks to Joshua Haberman.

    * MIPS64: Hide internal function.

    * x64/"LJ_GC64": Fix type-check-only variant of SLOAD. Thanks to Peter Cawley.

    * PPC: Add soft-float support to JIT compiler backend. Contributed by Djordje Kovacevic and Stefan Pejic from RT-RK.com. Sponsored by Cisco Systems, Inc.

    * x64/"LJ_GC64": Fix fallback case of "asm_fuseloadk64()". Contributed by Peter Cawley.

The HTML version of the change log with lots of helpful hyper-links can be browsed here:

https://openresty.org/en/changelog-1013006.html

OpenResty is a full-fledged web platform built by bundling the standard Nginx core, Lua/LuaJIT, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: https://openresty.org/

We have run extensive testing on our Amazon EC2 test cluster and ensured that all the components (including the Nginx core) play well together. The latest test report can always be found here: https://qa.openresty.org/

We also always run our OpenResty Edge commercial software, based on the latest open source version of OpenResty, in our own global CDN network (dubbed "mini CDN") powering our openresty.org and openresty.com websites. See https://openresty.com/ for more details.

Have fun!

Best regards,
Yichun

---
President & CEO of OpenResty Inc.
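As a minimal illustration of the bundled ngx_stream_lua_module mentioned in the announcement above, a trivial TCP server can be configured as follows. This is only a sketch based on the module's README, not part of the release notes; the port number is arbitrary:

```nginx
# Sketch only: assumes OpenResty 1.13.6.1 with ngx_stream_lua_module
# enabled (the default in this release).
stream {
    server {
        listen 12345;

        # content_by_lua_block is one of the stream-subsystem phases
        # supported in this release; reply to each TCP connection
        # with a greeting and close it.
        content_by_lua_block {
            ngx.say("hello from the stream subsystem")
        }
    }
}
```

A client such as `nc localhost 12345` would then receive the greeting.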
From ganesh.1.kumar at gm.com Wed Nov 15 18:48:28 2017
From: ganesh.1.kumar at gm.com (Ganesh Kumar)
Date: Wed, 15 Nov 2017 18:48:28 +0000
Subject: Question about reverse proxy for Prometheus
Message-ID: <6382201ad5184bf0a18e4f95b2872855@gm.com>

Hi,

How do I set up a reverse proxy with TLS for an application like Prometheus/Node-exporter, which is running on port 9100? The issue I am facing is that when I update the "listen" parameter in the block to port 9090, I am NOT able to start the Node-exporter process, as Nginx seems to be blocking this port. I want to take all the traffic on port 9100 and reroute it through HTTPS.

Ganesh

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hemelaar at desikkel.nl Wed Nov 15 18:54:45 2017
From: hemelaar at desikkel.nl (Jean-Paul Hemelaar)
Date: Wed, 15 Nov 2017 19:54:45 +0100
Subject: Different Naxsi rulesets
Message-ID: 

Hi,

With help from the Naxsi mailing list I found that my idea is indeed not possible. Naxsi doesn't process subrequests, so that's why it didn't work as I expected. It seems to be on the roadmap to change this behavior.
My workaround for now is to move the two rulesets into different server blocks in Nginx:

Serverblock 1, listening on port 8080, makes the decision to send the request to the strict or the not-strict Naxsi
Serverblock 2, listening on port 8081, applies the strict rules
Serverblock 3, listening on port 8082, applies the less-strict rules

This works!

Thanks for your help,

JP

On Mon, Nov 13, 2017 at 8:30 PM, Aziz Rozyev wrote:
> hello,
>
> how about logs? does naxsi provide any variables that can be monitored?
>
> so far it seems that your rules in "strict|relaxed" are not triggering,
> the "default"
> one will always hit (as expected), as it's the first location "/" from where
> you route to the other 2 locations.
>
> also, try to log in debug mode, maybe that will give more insights.
>
> br,
> Aziz.
>
> > On 13 Nov 2017, at 21:47, Jean-Paul Hemelaar wrote:
> >
> > Hi,
> >
> > I have updated the config to use 'map' instead of the if-statements. That's indeed a better way.
> > The problem however remains:
> >
> > - Naxsi mainrules are in the http-block
> > - Config similar to:
> >
> > map $geoip_country_code $ruleSetCC {
> >     default "strict";
> >     CC1     "relaxed";
> >     CC2     "relaxed";
> > }
> >
> > location /strict/ {
> >     include /usr/local/nginx/naxsi.rules.strict;
> >
> >     proxy_pass http://app-server/;
> > }
> >
> > location /relaxed/ {
> >     include /usr/local/nginx/naxsi.rules.relaxed;
> >
> >     proxy_pass http://app-server/;
> > }
> >
> > location / {
> >     include /usr/local/nginx/naxsi.rules.default;
> >
> >     set $ruleSet $ruleSetCC;
> >     rewrite ^(.*)$ /$ruleSet$1 last;
> > }
> >
> > It's always using naxsi.rules.default. If this line is removed it's not using any rules (pass-all).
> >
> > Thanks so far!
> >
> > JP
> >
> > On Mon, Nov 13, 2017 at 2:14 PM, Aziz Rozyev wrote:
> > At first glance the config looks correct, so probably it's something with the naxsi rulesets.
> > Btw, why don't you use maps?
> >
> > map $geoip_country_code $strictness {
> >     default  "strict";
> >     CC_1     "not-so-strict";
> >     CC_2     "not-so-strict";
> >     # .. more country codes;
> > }
> >
> > # strict and not-so-strict locations
> >
> > map $strictness $path {
> >     "strict"         "/strict/";
> >     "not-so-strict"  "/not-so-strict/";
> > }
> >
> > location / {
> >     return 302 $path;
> >     # ..
> > }
> >
> > br,
> > Aziz.
> >
> > > On 12 Nov 2017, at 14:03, Jean-Paul Hemelaar wrote:
> > >
> > > T THIS WORKS:
> > > # include /usr/local/n
> >
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
> >
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at forum.nginx.org Wed Nov 15 22:34:35 2017
From: nginx-forum at forum.nginx.org (michaldejmek)
Date: Wed, 15 Nov 2017 17:34:35 -0500
Subject: Nginx reverse proxy with Sharepoint web
In-Reply-To: 
References: 
Message-ID: 

Can you add your configuration file ..?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277165,277340#msg-277340
From brentgclarklist at gmail.com Thu Nov 16 13:03:49 2017
From: brentgclarklist at gmail.com (Brent Clark)
Date: Thu, 16 Nov 2017 15:03:49 +0200
Subject: Nginx drops server out of our LB if it sees HTTP 400.
Message-ID: <4d0794a8-70b4-0a1c-41c4-c1e780db3c1e@gmail.com>

Good day Guys

I'm sitting with a very peculiar problem, and I was hoping someone could be of assistance.

Right now everything is a theory, but when I switch back to LVS everything works. The reason I like and want Nginx is for the reverse caching (caching of images).

As said, I'm using Nginx for reverse caching and load balancing, and I'm seeing the following in the Nginx error log:

2017/11/16 10:16:38 [error] 75140#75140: *27952 no live upstreams while connecting to upstream, client: 52.169.148.4, server: REMOVEDCLIENTDOMAIN, request: "GET /1298310/SNIPPET_OF_URL HTTP/1.1", upstream: "https://sslloadbalance/1298310/SNIPPET_OF_URL", host: "REMOVEDCLIENTDOMAIN".

I understand that Nginx says it can't connect to the backend servers, but the backend servers are 100%.
My theory is that whenever a URL is called and an HTTP 400 is returned, Nginx picks this up, does not like it, and then drops the server out, e.g.

REMOVED_IP_OF_LB - - [16/Nov/2017:10:16:38 +0200] "GET /wp-admin/admin-ajax.php?action=yop_poll_load_js&id=-1&location=page&unique_id=_yp5a0d4965c79ac&ver=5.5 HTTP/1.0" 400 226 "-" "-"

My configuration(s) is very standard / basic:

https://pastebin.com/B6rFwnb6
https://pastebin.com/wsdGPC74

I would have liked to have used 'health_check', but that is only available in Nginx Plus.

If I run a tcptraceroute to port 80 in a while loop to the back end servers, everything is ok, and going back to LVS, everything is ok.

I would like to ask if there is a way to either ignore the 400 error status, or, if I may ask, what is a better way to manage and handle this?

Many thanks,

Regards
Brent

From mdounin at mdounin.ru Thu Nov 16 16:04:40 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 16 Nov 2017 19:04:40 +0300
Subject: Nginx drops server out of our LB if it sees HTTP 400.
In-Reply-To: <4d0794a8-70b4-0a1c-41c4-c1e780db3c1e@gmail.com>
References: <4d0794a8-70b4-0a1c-41c4-c1e780db3c1e@gmail.com>
Message-ID: <20171116160440.GC26836@mdounin.ru>

Hello!

On Thu, Nov 16, 2017 at 03:03:49PM +0200, Brent Clark wrote:

> Good day Guys
>
> I'm sitting with a very peculiar problem, and I was hoping someone could
> be of assistance.
>
> Right now everything is a theory, but when I switch back to LVS
> everything works. Reason I like and want Nginx is for the reverse
> caching (caching of images).
>
> As said, Im using Nginx for reverse caching and load balancing, and I'm
> seeing the following in the Nginx error log.
>
> 2017/11/16 10:16:38 [error] 75140#75140: *27952 no live upstreams while
> connecting to upstream, client: 52.169.148.4, server:
> REMOVEDCLIENTDOMAIN, request: "GET /1298310/SNIPPET_OF_URL HTTP/1.1",
> upstream: "https://sslloadbalance/1298310/SNIPPET_OF_URL", host:
> "REMOVEDCLIENTDOMAIN".
>
> I understand that, Nginx says is cant connect to the backend servers,
> but the backend servers are 100%.

It says that all the backends configured have previously failed and are not allowed to serve requests due to the max_fails / fail_timeout configured. Look for previously reported errors to find out the exact errors, and take a look at your proxy_next_upstream configuration in case it's not the default one. More information is here:

http://nginx.org/r/proxy_next_upstream
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#max_fails
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#fail_timeout

> My theory is, when ever a URL is called and an HTTP 400 is returned,
> Nginx picks this up, and does not like it, and then drops the server
> out., e.g.
>
> REMOVED_IP_OF_LB - - [16/Nov/2017:10:16:38 +0200] "GET
> /wp-admin/admin-ajax.php?action=yop_poll_load_js&id=-1&location=page&unique_id=_yp5a0d4965c79ac&ver=5.5
> HTTP/1.0" 400 226 "-" "-"

No, the theory isn't correct. Responses with status code 400 have nothing to do with the above error.

-- 
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org Thu Nov 16 20:26:06 2017
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Thu, 16 Nov 2017 15:26:06 -0500
Subject: Nginx optimal speed in limit_rate for video streams
Message-ID: 

So when dealing with mp4 and similar video streams, what is the best speed at which to send / transfer files to people that does not cause delays or lagging in the video?

My current config:

location /video/ {
    mp4;
    limit_rate_after 1m;
    limit_rate 1m;
}

On other sites, when I download / watch videos, it seems they transfer files at speeds of 200k/s. Should I lower my rates?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277352,277352#msg-277352

From lucas at lucasrolff.com Thu Nov 16 20:58:55 2017
From: lucas at lucasrolff.com (Lucas Rolff)
Date: Thu, 16 Nov 2017 20:58:55 +0000
Subject: Nginx optimal speed in limit_rate for video streams
In-Reply-To: 
References: 
Message-ID: 

It depends on the bitrate of your videos as well; it will be hard to play 4K video at 1 megabit/s.

Get Outlook for iOS

________________________________
From: nginx on behalf of c0nw0nk
Sent: Thursday, November 16, 2017 9:26:06 PM
To: nginx at nginx.org
Subject: Nginx optimal speed in limit_rate for video streams

So when dealing with mp4 etc video streams what is the best speed to send / transfer files to people that does not cause delays in latency / lagging on the video due etc.

My current :

location /video/ {
    mp4;
    limit_rate_after 1m;
    limit_rate 1m;
}

On other sites when i download / watch videos it seems they transfer files at speeds of 200k/s

Should I lower my rates ?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277352,277352#msg-277352

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rdocter at gmail.com Thu Nov 16 20:59:49 2017
From: rdocter at gmail.com (Ruben)
Date: Thu, 16 Nov 2017 21:59:49 +0100
Subject: Nginx dynamic proxy_pass keeps redirecting to wrong domain
Message-ID: 

I am using the following config:

http {
    server {
        listen 80;

        location / {
            resolver 127.0.0.11;
            auth_request /auth;
            auth_request_set $instance $upstream_http_x_instance;
            proxy_pass http://$instance;
        }

        location = /auth {
            internal;
            proxy_pass http://auth;
            proxy_pass_request_body off;
            proxy_set_header Content-Length "";
            proxy_set_header X-Original-URI $request_uri;
        }
    }
}

I want to auth all routes (location /) to this server. It is a content server.
The proxy_pass http://auth; call does the real authentication and is handled by a Go server. The response to this request also sets a header, X-Instance, which contains the name of a docker service, for example instance-001. If authentication succeeds, auth_request_set is set to the value of the X-Instance header, for example instance-001.

Now I want to serve content from this instance by utilizing proxy_pass http://$instance;. I have read a lot about dynamic proxy_pass and what to do, but nothing succeeds.

The problem is, when I go to http://example.com/cdn/test/test.jpg in the browser, it redirects me to http://instance-001/cdn/test/test.jpg, which is of course not correct. It should proxy the docker service with the name instance-001.

I have looked into proxy_redirect, but it isn't clear to me how to set it correctly. I also tried a rewrite like rewrite ^(/.*) $1 break; in location = /auth. But I still have the annoying redirect to http://instance-001/cdn/test/test.jpg.

I've been struggling with this for a very long time now and I can't find a solid solution.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From francis at daoine.org Thu Nov 16 22:56:06 2017
From: francis at daoine.org (Francis Daly)
Date: Thu, 16 Nov 2017 22:56:06 +0000
Subject: Nginx dynamic proxy_pass keeps redirecting to wrong domain
In-Reply-To: 
References: 
Message-ID: <20171116225606.GI3127@daoine.org>

On Thu, Nov 16, 2017 at 09:59:49PM +0100, Ruben wrote:

Hi there,

> location / {
> proxy_pass http://$instance;
> }

> The problem is, when I go to http://example.com/cdn/test/test.jpg in the
> browser, it redirects me to http://instance-001/cdn/test/test.jpg. Which is
> ofcourse not correct. It should proxy the docker service with name
> instance-001.

Are you reporting that nginx creates the redirect, or that nginx passes along the redirect that the upstream web server creates?

The two cases probably have different fixes.

What do the (nginx and upstream web server) logs say?
What do the (nginx and upstream web server) logs say?

f
--
Francis Daly francis at daoine.org

From rdocter at gmail.com Fri Nov 17 03:48:04 2017
From: rdocter at gmail.com (Ruben D)
Date: Fri, 17 Nov 2017 04:48:04 +0100
Subject: Nginx dynamic proxy_pass keeps redirecting to wrong domain
In-Reply-To: <20171116225606.GI3127@daoine.org>
References: <20171116225606.GI3127@daoine.org>
Message-ID: <4F541334-84B0-445A-8042-542B893FD4FD@gmail.com>

In my browser bar I see the address beginning with instance-001, while I expect not to have a redirect and to just see http://example.com/etc...

Is that what you mean?

Sent from my iPhone

> On 16 Nov 2017, at 23:56, Francis Daly wrote:
>
> On Thu, Nov 16, 2017 at 09:59:49PM +0100, Ruben wrote:
>
> Hi there,
>
>> location / {
>> proxy_pass http://$instance;
>> }
>
>> The problem is, when I go to http://example.com/cdn/test/test.jpg in the
>> browser, it redirects me to http://instance-001/cdn/test/test.jpg. Which is
>> of course not correct. It should proxy to the Docker service named
>> instance-001.
>
> Are you reporting that nginx creates the redirect, or that nginx passes
> along the redirect that the upstream web server creates?
>
> The two cases probably have different fixes.
>
> What do the (nginx and upstream web server) logs say?
>
> f
> --
> Francis Daly francis at daoine.org
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From peter_booth at me.com Fri Nov 17 03:55:26 2017
From: peter_booth at me.com (Peter Booth)
Date: Thu, 16 Nov 2017 22:55:26 -0500
Subject: Nginx dynamic proxy_pass keeps redirecting to wrong domain
In-Reply-To: <4F541334-84B0-445A-8042-542B893FD4FD@gmail.com>
References: <20171116225606.GI3127@daoine.org> <4F541334-84B0-445A-8042-542B893FD4FD@gmail.com>
Message-ID: <0A4BC312-B63C-4DF1-A31D-7DB1B81351AD@me.com>

You need to understand, step-by-step, exactly what is happening.
Here is one (of many) ways to do this:

1. Open the Chrome browser.
2. Right click on the background and select Inspect; this will open the developer tools page.
3. Select the "Network" tab, which shows you the HTTP requests issued for the current page.
4. Select the "Preserve log" check-box, which means that prior pages will still be shown here when you are redirected to another page.
5. Now open the URL in question. You will see the response from nginx, which should, if you are correctly reporting what happens, have a 301 or 302 code.

Alternatively, you can do this from a Linux or OS X command line using curl or wget.

> On Nov 16, 2017, at 10:48 PM, Ruben D wrote:
>
> In my browser bar I see the address beginning with instance-001, while I expect not to have a redirect and to just see http://example.com/etc...
>
> Is that what you mean?
>
> Sent from my iPhone
>
>> On 16 Nov 2017, at 23:56, Francis Daly wrote:
>>
>> On Thu, Nov 16, 2017 at 09:59:49PM +0100, Ruben wrote:
>>
>> Hi there,
>>
>>> location / {
>>> proxy_pass http://$instance;
>>> }
>>
>>> The problem is, when I go to http://example.com/cdn/test/test.jpg in the
>>> browser, it redirects me to http://instance-001/cdn/test/test.jpg. Which is
>>> of course not correct. It should proxy to the Docker service named
>>> instance-001.
>>
>> Are you reporting that nginx creates the redirect, or that nginx passes
>> along the redirect that the upstream web server creates?
>>
>> The two cases probably have different fixes.
>>
>> What do the (nginx and upstream web server) logs say?
>> >> f >> -- >> Francis Daly francis at daoine.org >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Fri Nov 17 06:49:45 2017 From: nginx-forum at forum.nginx.org (edanic0017) Date: Fri, 17 Nov 2017 01:49:45 -0500 Subject: why is proxy_cache_lock not working? Message-ID: <1ebded3bcc622fbea494d8ad053c2151.NginxMailingListEnglish@forum.nginx.org> I am running nginx server in front of httpd server. nginx -V nginx version: nginx/1.10.2 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-17) (GCC) built with OpenSSL 1.0.1e-fips 11 Feb 2013 TLS SNI support enabled configure arguments: --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/var/run/nginx.pid --lock-path=/var/lock/subsys/nginx --user=nginx --group=nginx --with-file-aio --with-ipv6 --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module=dynamic --with-http_image_filter_module=dynamic --with-http_geoip_module=dynamic --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_slice_module --with-http_stub_status_module --with-http_perl_module=dynamic --with-mail=dynamic 
--with-mail_ssl_module --with-pcre --with-pcre-jit --with-stream=dynamic --with-stream_ssl_module --with-debug --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --with-ld-opt=' -Wl,-E'

Below are the relevant parts of the nginx configuration:

proxy_cache_path /tmp/nginx/cache
    levels=1:2
    keys_zone=my_cache:1m
    max_size=10g
    inactive=30s
    use_temp_path=off;

location ~*\.(ts)$ {

    ##### Proxy cache settings

    proxy_http_version 1.1;
    proxy_cache my_cache;
    proxy_cache_revalidate on;
    proxy_cache_key $uri;
    proxy_cache_use_stale updating;
    proxy_cache_valid any 1m;
    proxy_cache_min_uses 1;
    proxy_cache_lock on;
    proxy_cache_lock_age 5s;
    proxy_cache_lock_timeout 1h;
    proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
    proxy_ignore_headers Set-Cookie;

    proxy_pass http://192.168.2.225:8080;
}

Even with proxy_cache_lock on, the origin server is getting hit on every miss request, per the logs below.

nginx access logs:

HOST82 - MISS [17/Nov/2017:00:22:03 -0600] "GET profile1/76861/HD_profile1_00001.ts HTTP/1.1" 206 262144 "-" "Roku/DVP-8.0 (298.00E04108A)" "-"
HOST225 - MISS [17/Nov/2017:00:22:04 -0600] "GET profile1/76861/HD_profile1_00001.ts HTTP/1.1" 206 262144 "-" "Roku/DVP-8.0 (508.00E03138A)" "-"
HOST187 - MISS [17/Nov/2017:00:22:05 -0600] "GET profile1/76861/HD_profile1_00001.ts HTTP/1.1" 200 4696992 "-" "Roku/DVP-7.70 (047.70E04135A)" "-"
HOST125 - MISS [17/Nov/2017:00:22:06 -0600] "GET profile1/76861/HD_profile1_00001.ts HTTP/1.1" 206 262144 "-" "Roku/DVP-8.0 (248.00E04108A)" "-"
HOST80 - MISS [17/Nov/2017:00:22:08 -0600] "GET profile1/76861/HD_profile1_00001.ts HTTP/1.1" 206 262144 "-" "Roku/DVP-8.0 (288.00E04108A)" "-"
HOST80 - MISS [17/Nov/2017:00:22:15 -0600] "GET profile1/76861/HD_profile1_00002.ts HTTP/1.1" 206 262144 "-" "Roku/DVP-8.0 (288.00E04108A)" "-"
HOST82 - MISS [17/Nov/2017:00:22:16 -0600] "GET profile1/76861/HD_profile1_00002.ts HTTP/1.1" 206 262144 "-" "Roku/DVP-8.0 (298.00E04108A)" "-"
HOST225 - MISS [17/Nov/2017:00:22:18 -0600] "GET profile1/76861/HD_profile1_00002.ts HTTP/1.1" 206 262144 "-" "Roku/DVP-8.0 (508.00E03138A)" "-"
HOST187 - MISS [17/Nov/2017:00:22:19 -0600] "GET profile1/76861/HD_profile1_00002.ts HTTP/1.1" 200 4527604 "-" "Roku/DVP-7.70 (047.70E04135A)" "-"

Origin httpd logs:

192.168.2.226 - - [17/Nov/2017:06:29:40 +0000] "GET profile1/76861/HD_profile1_00001.ts HTTP/1.1" 200 4696992
192.168.2.226 - - [17/Nov/2017:06:29:41 +0000] "GET profile1/76861/HD_profile1_00001.ts HTTP/1.1" 200 4696992
192.168.2.226 - - [17/Nov/2017:06:29:42 +0000] "GET profile1/76861/HD_profile1_00001.ts HTTP/1.1" 200 4696992
192.168.2.226 - - [17/Nov/2017:06:29:43 +0000] "GET profile1/76861/HD_profile1_00001.ts HTTP/1.1" 200 4696992
192.168.2.226 - - [17/Nov/2017:06:29:45 +0000] "GET profile1/76861/HD_profile1_00001.ts HTTP/1.1" 200 4696992
192.168.2.226 - - [17/Nov/2017:06:29:52 +0000] "GET profile1/76861/HD_profile1_00002.ts HTTP/1.1" 200 4527604
192.168.2.226 - - [17/Nov/2017:06:29:53 +0000] "GET profile1/76861/HD_profile1_00002.ts HTTP/1.1" 200 4527604
192.168.2.226 - - [17/Nov/2017:06:29:55 +0000] "GET profile1/76861/HD_profile1_00002.ts HTTP/1.1" 200 4527604
192.168.2.226 - - [17/Nov/2017:06:29:56 +0000] "GET profile1/76861/HD_profile1_00002.ts HTTP/1.1" 200 4527604
192.168.2.226 - - [17/Nov/2017:06:29:56 +0000] "GET profile1/76861/HD_profile1_00002.ts HTTP/1.1" 200 4527604
192.168.2.226 - - [17/Nov/2017:06:30:03 +0000] "GET profile1/76861/HD_profile1_00003.ts HTTP/1.1" 200 4866004
192.168.2.226 - - [17/Nov/2017:06:30:05 +0000] "GET profile1/76861/HD_profile1_00003.ts HTTP/1.1" 200 4866004
192.168.2.226 - - [17/Nov/2017:06:30:05 +0000] "GET profile1/76861/HD_profile1_00003.ts HTTP/1.1" 200 4866004
192.168.2.226 - - [17/Nov/2017:06:30:07 +0000] "GET profile1/76861/HD_profile1_00003.ts HTTP/1.1" 200 4866004
192.168.2.226 - - [17/Nov/2017:06:30:08 +0000] "GET profile1/76861/HD_profile1_00003.ts HTTP/1.1" 200 4866004

What am I missing?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277358,277358#msg-277358 From francis at daoine.org Fri Nov 17 07:53:29 2017 From: francis at daoine.org (Francis Daly) Date: Fri, 17 Nov 2017 07:53:29 +0000 Subject: Nginx dynamic proxy_pass keeps redirecting to wrong domain In-Reply-To: <4F541334-84B0-445A-8042-542B893FD4FD@gmail.com> References: <20171116225606.GI3127@daoine.org> <4F541334-84B0-445A-8042-542B893FD4FD@gmail.com> Message-ID: <20171117075329.GJ3127@daoine.org> On Fri, Nov 17, 2017 at 04:48:04AM +0100, Ruben D wrote: Hi there, > In my browser bar I see the address beginning with instance-001. While I expect not to have a redirect and just see http://example.com/etc... That means that when your browser asks nginx for http://example.com/etc, nginx sends it back a HTTP redirect to http://instance-001/etc. My question is: is it the instance-001 web server that sends that redirect to nginx before nginx sends it to your browser? The instance-001 web server logs should show what it does. Set up a quiet system. Make one http request. Report what the logs say. f -- Francis Daly francis at daoine.org From rdocter at gmail.com Fri Nov 17 13:16:41 2017 From: rdocter at gmail.com (Ruben) Date: Fri, 17 Nov 2017 14:16:41 +0100 Subject: Nginx dynamic proxy_pass keeps redirecting to wrong domain In-Reply-To: <20171117075329.GJ3127@daoine.org> References: <20171116225606.GI3127@daoine.org> <4F541334-84B0-445A-8042-542B893FD4FD@gmail.com> <20171117075329.GJ3127@daoine.org> Message-ID: The logs of instance-001 show that a GET request is made at instance-001 and that it is a 301 redirect. 
instance-001 log when making the request: "GET /cdn/test/test.jpg HTTP/1.0" 301 185 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36" 2017-11-17 8:53 GMT+01:00 Francis Daly : > On Fri, Nov 17, 2017 at 04:48:04AM +0100, Ruben D wrote: > > Hi there, > > > In my browser bar I see the address beginning with instance-001. While I > expect not to have a redirect and just see http://example.com/etc... > > That means that when your browser asks nginx for http://example.com/etc, > nginx sends it back a HTTP redirect to http://instance-001/etc. > > My question is: is it the instance-001 web server that sends that redirect > to nginx before nginx sends it to your browser? The instance-001 web > server logs should show what it does. > > Set up a quiet system. > > Make one http request. > > Report what the logs say. > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Fri Nov 17 14:02:15 2017 From: francis at daoine.org (Francis Daly) Date: Fri, 17 Nov 2017 14:02:15 +0000 Subject: Nginx dynamic proxy_pass keeps redirecting to wrong domain In-Reply-To: References: <20171116225606.GI3127@daoine.org> <4F541334-84B0-445A-8042-542B893FD4FD@gmail.com> <20171117075329.GJ3127@daoine.org> Message-ID: <20171117140215.GK3127@daoine.org> On Fri, Nov 17, 2017 at 02:16:41PM +0100, Ruben wrote: Hi there, > instance-001 log when making the request: > > "GET /cdn/test/test.jpg HTTP/1.0" 301 185 "-" "Mozilla/5.0 (Windows NT > 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) > Chrome/62.0.3202.94 Safari/537.36" Ok, that's good to know. 
The client asks nginx for /cdn/test/test.jpg (with a Host: header of example.com); nginx asks upstream for /cdn/test/test.jpg (with a Host: header that we have not yet confirmed); upstream says to ask for /cdn/test/test.jpg with a Host: header of instance-001.

One possibility is to change the web server on instance-001 to serve the content to all requests, no matter what Host: header they use. That is a bit unfriendly to instance-001.

Another possibility is to make sure that nginx sends a Host: header of instance-001 when it makes a request of instance-001. That should need no changes to instance-001.

You could check fuller logs, or tcpdump the traffic, to see what Host: header nginx currently sends to instance-001. But if you don't really care, you can just configure nginx to send what you want.

(Yet another possibility is to change the web server on instance-001 to serve the instance-001 content whenever it gets the Host: header that nginx currently sends. That only works if you know what nginx currently sends.)

The second choice above is the one that is within nginx's control. So (assuming that you have no proxy_set_header values already configured at server{} or http{} level), in the same place where you currently have

proxy_pass http://$instance;

add

proxy_set_header Host $instance;

and see what changes.

If it doesn't all Just Work, then you may need to see what exactly nginx is sending; but hopefully it won't come to that.

Note: I think that this should not be necessary, since I think that nginx should probably already be setting the Host header to the value of $instance. But obviously something is going wrong, so I think that it is worth being explicit.

Good luck with it,

f
--
Francis Daly francis at daoine.org

From mdounin at mdounin.ru Fri Nov 17 14:54:42 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 17 Nov 2017 17:54:42 +0300
Subject: why is proxy_cache_lock not working?
In-Reply-To: <1ebded3bcc622fbea494d8ad053c2151.NginxMailingListEnglish@forum.nginx.org> References: <1ebded3bcc622fbea494d8ad053c2151.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20171117145442.GF26836@mdounin.ru> Hello! On Fri, Nov 17, 2017 at 01:49:45AM -0500, edanic0017 wrote: [...] > Below are the parts of the nginx configuration > > > proxy_cache_path /tmp/nginx/cache > levels=1:2 > keys_zone=my_cache:1m > max_size=10g > inactive=30s > use_temp_path=off; > > location ~*\.(ts)$ { > > ##### Proxy cache settings > > proxy_http_version 1.1; > proxy_cache my_cache; > proxy_cache_revalidate on; > proxy_cache_key $uri; > proxy_cache_use_stale updating; > proxy_cache_valid any 1m; > proxy_cache_min_uses 1; > proxy_cache_lock on; > proxy_cache_lock_age 5s; > proxy_cache_lock_timeout 1h; > proxy_ignore_headers X-Accel-Expires Expires Cache-Control; > proxy_ignore_headers Set-Cookie; > > > proxy_pass http://192.168.2.225:8080; > } > > > Even with proxy_cache_lock on, the original server is getting hit on every > miss request per the logs below. 
>
> nginx access logs:
>
> HOST82 - MISS [17/Nov/2017:00:22:03 -0600] "GET profile1/76861/HD_profile1_00001.ts HTTP/1.1" 206 262144 "-" "Roku/DVP-8.0 (298.00E04108A)" "-"
> HOST225 - MISS [17/Nov/2017:00:22:04 -0600] "GET profile1/76861/HD_profile1_00001.ts HTTP/1.1" 206 262144 "-" "Roku/DVP-8.0 (508.00E03138A)" "-"
> HOST187 - MISS [17/Nov/2017:00:22:05 -0600] "GET profile1/76861/HD_profile1_00001.ts HTTP/1.1" 200 4696992 "-" "Roku/DVP-7.70 (047.70E04135A)" "-"
> HOST125 - MISS [17/Nov/2017:00:22:06 -0600] "GET profile1/76861/HD_profile1_00001.ts HTTP/1.1" 206 262144 "-" "Roku/DVP-8.0 (248.00E04108A)" "-"
> HOST80 - MISS [17/Nov/2017:00:22:08 -0600] "GET profile1/76861/HD_profile1_00001.ts HTTP/1.1" 206 262144 "-" "Roku/DVP-8.0 (288.00E04108A)" "-"

Unless the response takes many seconds (unlikely for a 5 megabyte file; use $upstream_response_time to be sure), MISS in both the 00:22:03 and 00:22:08 lines indicates that the response is not cached at all.

Given that you already ignore the X-Accel-Expires, Expires, Cache-Control, and Set-Cookie headers in your configuration, the remaining header which may affect caching is Vary. Check if it's in the responses.

Also, caching might not work if the response is incorrect (the number of bytes returned does not match Content-Length, or the connection is closed with an error), or if nginx can't write to the cache directory. Check your error log for possible errors reported.

There is also a small possibility that caching works, but particular items are almost immediately removed due to lack of resources - given your cache is configured with only a 1m keys zone, it is only capable of storing about 8k items. I don't think this is the case though.
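[As an illustration of the points discussed in this thread, here is a sketch of the cache block with the Vary header ignored and a larger keys zone. The values are illustrative only, not a tested recommendation for this setup:]

```nginx
# Sketch only: Vary can be listed in proxy_ignore_headers (nginx 1.7.7+),
# and a 10m keys zone holds roughly ten times as many items as 1m.
proxy_cache_path /tmp/nginx/cache levels=1:2 keys_zone=my_cache:10m
                 max_size=10g inactive=30s use_temp_path=off;

location ~*\.(ts)$ {
    proxy_cache          my_cache;
    proxy_cache_key      $uri;
    proxy_cache_lock     on;
    proxy_ignore_headers X-Accel-Expires Expires Cache-Control Set-Cookie Vary;
    proxy_pass           http://192.168.2.225:8080;
}
```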
--
Maxim Dounin
http://mdounin.ru/

From rdocter at gmail.com Sat Nov 18 05:51:48 2017
From: rdocter at gmail.com (Ruben D)
Date: Sat, 18 Nov 2017 06:51:48 +0100
Subject: Nginx dynamic proxy_pass keeps redirecting to wrong domain
In-Reply-To: <20171117140215.GK3127@daoine.org>
References: <20171116225606.GI3127@daoine.org> <4F541334-84B0-445A-8042-542B893FD4FD@gmail.com> <20171117075329.GJ3127@daoine.org> <20171117140215.GK3127@daoine.org>
Message-ID: <9609E55D-B1DC-41FB-ABF5-D1C66DE4EC9C@gmail.com>

Thank you, now it seems to work!

Sent from my iPhone

> On 17 Nov 2017, at 15:02, Francis Daly wrote:
>
> On Fri, Nov 17, 2017 at 02:16:41PM +0100, Ruben wrote:
>
> Hi there,
>
>> instance-001 log when making the request:
>>
>> "GET /cdn/test/test.jpg HTTP/1.0" 301 185 "-" "Mozilla/5.0 (Windows NT
>> 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko)
>> Chrome/62.0.3202.94 Safari/537.36"
>
> Ok, that's good to know.
>
> The client asks nginx for /cdn/test/test.jpg (with a Host: header
> of example.com); nginx asks upstream for /cdn/test/test.jpg (with a
> Host: header that we have not yet confirmed); upstream says to ask for
> /cdn/test/test.jpg with a Host: header of instance-001.
>
> One possibility is to change the web server on instance-001 to serve the
> content to all requests, no matter what Host: header they use. That is
> a bit unfriendly to instance-001.
>
> Another possibility is to make sure that nginx sends a Host: header
> of instance-001 when it makes a request of instance-001. That should need
> no changes to instance-001.
>
> You could check fuller logs, or tcpdump the traffic, to see what Host:
> header nginx currently sends to instance-001. But if you don't really
> care, you can just configure nginx to send what you want.
>
> (Yet another possibility is to change the web server on instance-001
> to serve the instance-001 content whenever it gets the Host: header
> that nginx currently sends.
> That only works if you know what nginx currently sends.)
>
> The second choice above is the one that is within nginx's control. So
> (assuming that you have no proxy_set_header values already configured
> at server{} or http{} level), in the same place where you currently have
>
> proxy_pass http://$instance;
>
> add
>
> proxy_set_header Host $instance;
>
> and see what changes.
>
> If it doesn't all Just Work, then you may need to see what exactly nginx
> is sending; but hopefully it won't come to that.
>
> Note: I think that this should not be necessary, since I think that
> nginx should probably already be setting the Host header to the value
> of $instance. But obviously something is going wrong, so I think that
> it is worth being explicit.
>
> Good luck with it,
>
> f
> --
> Francis Daly francis at daoine.org
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at forum.nginx.org Sat Nov 18 09:33:45 2017
From: nginx-forum at forum.nginx.org (edanic0017)
Date: Sat, 18 Nov 2017 04:33:45 -0500
Subject: why is proxy_cache_lock not working?
In-Reply-To: <20171117145442.GF26836@mdounin.ru>
References: <20171117145442.GF26836@mdounin.ru>
Message-ID: <073b36722bb78b0e978f55ef4edcc68d.NginxMailingListEnglish@forum.nginx.org>

Thanks for your reply. Updating proxy_ignore_headers with Vary fixed the issue.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277358,277378#msg-277378

From nginx-forum at forum.nginx.org Sat Nov 18 13:21:59 2017
From: nginx-forum at forum.nginx.org (tamer1009)
Date: Sat, 18 Nov 2017 08:21:59 -0500
Subject: Nginx can't reload even when config is OK
Message-ID:

Hello everybody, I have been working with Nginx for quite a long time, but I have stumbled upon a very strange error. Nginx is compiled from source on CentOS 7. All systemctl commands work, except the reload function.
Job for nginx.service failed because the control process exited with error code. See "systemctl status nginx.service" and "journalctl -xe" for details.

I have already tried to run the action manually - the /bin/kill -s HUP $MAINPID - which seemed to work.

systemctl status nginx.service -l gives me:

nginx.service - The nginx HTTP and reverse proxy server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
   Active: active (running) (Result: exit-code) since Sat 2017-11-18 08:17:14 EST; 1min 34s ago
  Process: 22036 ExecReload=/bin/kill -s HUP (code=exited, status=1/FAILURE)
  Process: 22025 ExecStart=/usr/sbin/nginx (code=exited, status=0/SUCCESS)
  Process: 22020 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=0/SUCCESS)
  Process: 22018 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)
 Main PID: 22028 (nginx)
   CGroup: /system.slice/nginx.service
           22028 nginx: master process /usr/sbin/ngin
           22029 nginx: worker proces

Nov 18 08:17:21 host kill[22036]: -s, --signal send specified signal
Nov 18 08:17:21 host kill[22036]: -q, --queue use sigqueue(2) rather than kill(2)
Nov 18 08:17:21 host kill[22036]: -p, --pid print pids without signaling them
Nov 18 08:17:21 host kill[22036]: -l, --list [=] list signal names, or convert one to a name
Nov 18 08:17:21 host kill[22036]: -L, --table list signal names and numbers
Nov 18 08:17:21 host kill[22036]: -h, --help display this help and exit
Nov 18 08:17:21 host kill[22036]: -V, --version output version information and exit
Nov 18 08:17:21 host kill[22036]: For more details see kill(1).
Nov 18 08:17:21 host systemd[1]: nginx.service: control process exited, code=exited status=1
Nov 18 08:17:21 host systemd[1]: Reload failed for The nginx HTTP and reverse proxy server.

nginx -t passes successfully. The logs don't provide any information. The config is the default config.

Does anybody know where I have to look now? Thanks!
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277379,277379#msg-277379

From nginx-ml at acheronmedia.hr Sat Nov 18 13:59:11 2017
From: nginx-ml at acheronmedia.hr (Vlad K.)
Date: Sat, 18 Nov 2017 14:59:11 +0100
Subject: Nginx can't reload even when config is OK
In-Reply-To: References: Message-ID: <1de15dbfcd212b199928ac7d7bdf7ed2@acheronmedia.hr>

On 2017-11-18 14:21, tamer1009 wrote:

> Process: 22036 ExecReload=/bin/kill -s HUP (code=exited,
> status=1/FAILURE)

Systemd is horribly broken. I've seen this happen when PIDFile is set in the service file and you have RestrictSystem in some capacity. There's a race condition in systemd that causes the pidfile not to be created. May or may not be your solution, but in those cases removing PIDFile from the service file (and allowing systemd to guess the PID for $MAINPID) was the solution.

--
Vlad K.
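[For readers following along: the status output quoted above shows kill being run with no PID argument, i.e. $MAINPID was empty. A sketch of the unit-file lines under discussion - paths and values are illustrative, assembled from the quoted status output, not the poster's actual file:]

```ini
[Service]
Type=forking
# PIDFile=/run/nginx.pid   # dropping this lets systemd guess the main PID
                           # itself, so $MAINPID gets populated for ExecReload
ExecStartPre=/usr/bin/rm -f /run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecReload=/bin/kill -s HUP $MAINPID
```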
From nginx-forum at forum.nginx.org Mon Nov 20 07:59:39 2017
From: nginx-forum at forum.nginx.org (hook)
Date: Mon, 20 Nov 2017 02:59:39 -0500
Subject: confused about ngx_write_file
Message-ID:

Hi, I'm writing a module using ngx_write_file. I found that the file's offset is incorrect in this case:

```
u_char av = 0x01 | 0x04;
// file cur offset -> 4096;
ngx_write_file(file, &av, 1, 4)
```

file->offset would be 4097, but the real offset is 4096.
I'm confused by this code:

192 ssize_t
193 ngx_write_file(ngx_file_t *file, u_char *buf, size_t size, off_t offset)
194 {
........
221
222         file->offset += n;
223         written += n;
224
225         if ((size_t) n == size) {
226             return written;
227         }
228
229         offset += n;
230         size -= n;
231     }

Should it be:

222         written += n;
223         offset += n;
224
225         if (offset > file->offset) {
226             file->offset = offset;
227         }
228
229         if ((size_t) n == size) {
230             return written;
231         }
232
233         size -= n;
234     }

Thanks!

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277386,277386#msg-277386

From sr at inmobile.dk Mon Nov 20 10:33:26 2017
From: sr at inmobile.dk (Stephan Ryer)
Date: Mon, 20 Nov 2017 11:33:26 +0100
Subject: Issue with flooded warning and request limiting
Message-ID:

Hello,

We are using nginx as a proxy server in front of our IIS servers.

We have a client who needs to call us up to 200 times per second. Due to the round-trip time, 16 simultaneous connections are opened from the client, and each connection is used independently to send an https request, wait for x ms, and then send again.

I have been doing some tests and looked into the throttle logic in the nginx code. It seems that when setting the request limit to 200/sec, it is actually interpreted as "minimum 5 ms per call" in the code. If we receive 2 calls at the same time, the warning log will show an "excess" message and the call will be delayed to ensure a minimum of 5 ms between the calls (and if no burst is set, it will be an error message in the log and an error will be returned to the client).

We have set burst to 20, meaning that when our client only sends 1 request at a time per connection, he will never get an error reply from nginx; instead nginx just delays the call. I conclude that this is by design.

The issue, however, is that a client using multiple connections naturally often won't be able to time the calls between each connection.
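[To make the accounting described above concrete, here is a rough editorial sketch in Python of a limiter that enforces a minimum interval per request. It illustrates the behaviour described in this thread; it is not nginx's actual implementation:]

```python
def leaky_bucket(timestamps_ms, rate_per_sec=200, burst=20):
    """Classify each request as passed, delayed, or rejected.

    Sketch of "200 r/s means one request per 5 ms": accumulated
    "excess" drains at one request per interval and grows by one
    on every arrival. Excess beyond burst is rejected; any other
    positive excess is delayed (and, in nginx, logged as a warning).
    """
    interval = 1000.0 / rate_per_sec        # 5 ms at 200 r/s
    excess = 0.0
    last = None
    decisions = []
    for t in timestamps_ms:
        if last is not None:
            # excess drains while no requests arrive ...
            excess = max(0.0, excess - (t - last) / interval)
        excess += 1.0                       # ... and grows on each arrival
        last = t
        if excess - 1.0 > burst:
            decisions.append("rejected")
            excess -= 1.0                   # rejected requests don't count
        elif excess > 1.0:
            decisions.append("delayed")
        else:
            decisions.append("passed")
    return decisions

# Two simultaneous requests: the second is delayed, even though the
# total stays far below 200 in that second.
print(leaky_bucket([0, 0]))
```

With the sketch's accounting, two requests 10 ms apart both pass, while two simultaneous requests trigger a delay, which matches the observation that concurrent calls are flagged even at low total rates.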
And even though our burst has been set to 20, our log is flooded with warning messages which I do not think should be warnings at all. There is a difference between sending 2 calls at the same time and sending a total of 201 requests within a second, the latter being the only case I would expect to be logged as a warning.

Instead of calculating the throttling by simply looking at the last call time and calculating a minimum timespan between the last call and the current call, I would like the logic to be that nginx keeps a counter of the number of requests within the current second, and when the second expires and a new second starts, the counter is reset.

I know this will actually change the behavior of nginx, so I understand why this would be a breaking change if the solution were just to replace the existing logic. However, being able to configure which logic should be used would be of huge value to us. This would allow us to keep using the warning log for things that should actually be warned about, and not just for "10 calls per second which happened to be within a few millis".

I hope you will read this mail; please let me know if I need to explain something in more detail about the issue.

---
Med venlig hilsen / Best Regards

Stephan Ryer Møller
Partner & CTO

inMobile ApS
Axel Kiers Vej 18L
DK-8270 Højbjerg

Dir. +45 82 82 66 92
E-mail: sr at inmobile.dk

Web: www.inmobile.dk
Tel: +45 88 33 66 99
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ru at nginx.com Mon Nov 20 11:06:30 2017
From: ru at nginx.com (Ruslan Ermilov)
Date: Mon, 20 Nov 2017 14:06:30 +0300
Subject: confused about ngx_write_file
In-Reply-To: References: Message-ID: <20171120110630.GD734@lo0.su>

On Mon, Nov 20, 2017 at 02:59:39AM -0500, hook wrote:
> Hi
> I'm writing some module with using ngx_write_file.
> I found the file's offset is incorrect, in this case:
> ```
> u_char av = 0x01 | 0x04;
> // file cur offset -> 4096;
> ngx_write_file(file, &av, 1, 4)
> ```
> file->offset would be 4097, but the real offset is 4096.

file->offset is only used for sequential read/write. For other cases, its value is undefined.

> I'm confused with the codes :
> 192 ssize_t
> 193 ngx_write_file(ngx_file_t *file, u_char *buf, size_t size, off_t offset)
> 194 {
> ........
> 221
> 222     file->offset += n;
> 223     written += n;
> 224
> 225     if ((size_t) n == size) {
> 226         return written;
> 227     }
> 228
> 229     offset += n;
> 230     size -= n;
> 231 }
>
> should it be:
> 222     written += n;
> 223     offset += n;
> 224
> 225     if (offset > file->offset) {
> 226         file->offset = offset;
> 227     }
> 228
> 229     if ((size_t) n == size) {
> 230         return written;
> 231     }
> 232
> 233     size -= n;
> 234 }
>
> Thanks!
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277386,277386#msg-277386
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

--
Ruslan Ermilov
Assume stupidity not malice

From dreamwerx at gmail.com Mon Nov 20 11:31:59 2017
From: dreamwerx at gmail.com (DreamWerx)
Date: Mon, 20 Nov 2017 12:31:59 +0100
Subject: Issue with AWS NLB and nginx
Message-ID:

Hi all, I was hoping someone might have an idea here. I have a number of nginx instances doing load balancing sitting behind AWS's network load balancers (TCP), which seem to only support TCP checks. Recently a few have stopped working / frozen - they still seem to accept a TCP connection from the NLB, which leads the health check not to fail. But they cannot internally process the request, and you cannot even ssh into the machine. A reboot is required, and that takes longer than normal. I think the failure is related to a disk issue, since the only errors in the entire logs were regarding the disk.
(error logs below) Ideally if nginx or the O/S fails it would be better if the port just closed. I've considered writing a small daemon that monitors via http locally and keeps a port open if everything is ok. These machines have been running for months now without any issues until now. Anyone have an idea? Thanks! ---- [4161960.544106] INFO: task jbd2/xvda1-8:271 blocked for more than 120 seconds. [4161960.551035] Not tainted 4.4.0-1022-aws #31-Ubuntu [4161960.556118] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [4161960.562846] INFO: task monit:13224 blocked for more than 120 seconds. [4161960.567394] Not tainted 4.4.0-1022-aws #31-Ubuntu [4161960.571120] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [4162080.576076] INFO: task dhclient:696 blocked for more than 120 seconds. [4162080.579596] Not tainted 4.4.0-1022-aws #31-Ubuntu [4162080.582355] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [4162080.586470] INFO: task monit:13224 blocked for more than 120 seconds. [4162080.589847] Not tainted 4.4.0-1022-aws #31-Ubuntu [4162080.592654] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [4162200.596100] INFO: task jbd2/xvda1-8:271 blocked for more than 120 seconds. [4162200.599646] Not tainted 4.4.0-1022-aws #31-Ubuntu [4162200.602422] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [4162200.606423] INFO: task dhclient:696 blocked for more than 120 seconds. [4162200.610118] Not tainted 4.4.0-1022-aws #31-Ubuntu [4162200.613093] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [4162200.617889] INFO: task monit:13224 blocked for more than 120 seconds. [4162200.621641] Not tainted 4.4.0-1022-aws #31-Ubuntu [4162200.624506] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [4162244.551431] systemd[1]: Failed to start Journal Service. 
[4162320.628099] INFO: task jbd2/xvda1-8:271 blocked for more than 120 seconds. [4162320.631942] Not tainted 4.4.0-1022-aws #31-Ubuntu [4162320.635012] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [4162320.639647] INFO: task dhclient:696 blocked for more than 120 seconds. [4162320.643241] Not tainted 4.4.0-1022-aws #31-Ubuntu [4162320.646233] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [4162320.650712] INFO: task monit:13224 blocked for more than 120 seconds. [4162320.654190] Not tainted 4.4.0-1022-aws #31-Ubuntu [4162320.657183] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [4162334.801390] systemd[1]: Failed to start Journal Service. [4162425.051503] systemd[1]: Failed to start Journal Service. [4162515.301393] systemd[1]: Failed to start Journal Service. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Nov 20 13:01:03 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 20 Nov 2017 16:01:03 +0300 Subject: Issue with flooded warning and request limiting In-Reply-To: References: Message-ID: <20171120130102.GD62893@mdounin.ru> Hello! On Mon, Nov 20, 2017 at 11:33:26AM +0100, Stephan Ryer wrote: > We are using nginx as a proxy server in front of our IIS servers. > > We have a client who needs to call us up to 200 times per second. Due to > the roundtrip-time, 16 simultaneous connections are opened from the client > and each connection is used independently to send an https request, wait for > x ms and then send again. > > I have been doing some tests and looked into the throttle logic in the > nginx-code. It seems that when setting request limit to 200/sec it is > actually interpreted as "minimum 5ms per call" in the code. If we receive 2 > calls at the same time, the warning log will show an "excess"-message and > the call will be delayed to ensure a minimum of 5ms between the calls..
> (and if no burst is set, it will be an error message in the log and an > error will be returned to the client) > > We have set burst to 20, meaning that when our client only sends 1 request > at a time per connection, he will never get an error reply from nginx, > instead nginx just delays the call. I conclude that this is by design. Yes, the code counts average request rate, and if it sees two requests with just 1ms between them the average rate will be 1000 requests per second. This is more than what is allowed, and hence nginx will either delay the second request (unless configured with "nodelay"), or will reject it if the configured burst size is reached. > The issue, however, is that a client using multiple connections naturally > often won't be able to time the calls between each connection. And even > though our burst has been set to 20, our log is spammed with warning-messages > which I do not think should be a warning at all. There is a difference > between sending 2 calls at the same time and sending a total of 201 > requests within a second, the latter being the only case I would expect to > be logged as a warning. If you are not happy with log levels used, you can easily tune them using the limit_req_log_level directive. See http://nginx.org/r/limit_req_log_level for details. Note well that given the use case description, you probably don't need requests to be delayed at all, so consider using "limit_req .. nodelay;". It will avoid delaying logic altogether, thus allowing as many requests as burst permits. > Instead of calculating the throttling by simply looking at the last call > time and calculating a minimum timespan between last call and current call, I > would like the logic to be that nginx keeps a counter of the number of > requests within the current second, and when the second expires and a new > second exists, the counter is reset. This approach is not scalable. For example, it won't allow configuring a limit of 1 request per minute.
Moreover, it can easily allow more requests in a single second than configured - for example, a client can do 200 requests at 0.999 and an additional 200 requests at 1.000. According to your algorithm, this is allowed, yet it is 400 requests in just 2 milliseconds. The current implementation is much more robust, and it can be configured for various use cases. In particular, if you want to maintain a limit of 200 requests per second and want to tolerate cases when a client does all requests allowed within a second at the same time, consider: limit_req_zone $binary_remote_addr zone=one:10m rate=200r/s; limit_req zone=one burst=200 nodelay; This will switch off delays as already suggested above, and will allow a burst of up to 200 requests - that is, a client is allowed to do all 200 requests when a second starts. (If you really want to allow the case with 400 requests in 2 milliseconds as described above, consider using burst=400.) -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Mon Nov 20 13:20:08 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 20 Nov 2017 16:20:08 +0300 Subject: Issue with AWS NLB and nginx In-Reply-To: References: Message-ID: <20171120132008.GE62893@mdounin.ru> Hello! On Mon, Nov 20, 2017 at 12:31:59PM +0100, DreamWerx wrote: > I was hoping someone might have an idea here.. I have a number of nginx > instances doing load balancing sitting behind AWS's network load balancers (TCP) - > which seem to only support TCP checks. > > Recently a few have stopped working / frozen - they still seem to accept a > tcp connection from the NLB - which leads the health check not to fail. > But they cannot internally process the request and you cannot even ssh into > the machine. A reboot is required and that takes longer than normal. > > I think the failure is related to a disk issue, since the only errors in the > entire logs were regarding the disk. (error logs below) > > Ideally if nginx or the O/S fails it would be better if the port just > closed.
I've considered writing a small daemon that monitors via http > locally and keeps a port open if everything is ok. > > These machines have been running for months now without any issues until > now. > > Anyone have an idea? Once nginx is blocked on disk, it likely won't be able to do anything else - including closing ports, or accepting connections. Native TCP checks will still be able to see it as alive for some time though, as they really check that the port is still open. Such a check will probably recognize that the service is down only when the listen queue overflows. Given the above, it is generally a good idea to monitor not just ports, but some meaningful answers from a service. You should be able to configure such checks in AWS. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Mon Nov 20 13:49:40 2017 From: nginx-forum at forum.nginx.org (shivramg94) Date: Mon, 20 Nov 2017 08:49:40 -0500 Subject: Nginx reload intermittently fails when protocol specified in proxy_pass directive is specified as HTTPS Message-ID: I am trying to use nginx as a reverse proxy with upstream SSL. For this, I am using the below directive in the nginx configuration file proxy_pass https://; where "" is another file which has the list of upstream servers. upstream { server : weight=1; keepalive 100; } With this configuration if I try to reload the Nginx configuration, it fails intermittently with the below error message nginx: [emerg] host not found in upstream \"\" However, if I changed the protocol mentioned in the proxy_pass directive from https to http, then the reload goes through. Could anyone please explain what mistake I might be doing here? Thanks in advance.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277399,277399#msg-277399 From sr at inmobile.dk Mon Nov 20 14:28:36 2017 From: sr at inmobile.dk (Stephan Ryer) Date: Mon, 20 Nov 2017 15:28:36 +0100 Subject: Issue with flooded warning and request limiting In-Reply-To: <20171120130102.GD62893@mdounin.ru> References: <20171120130102.GD62893@mdounin.ru> Message-ID: Thank you very much for clearing this out. All I need to do is "limit_req_log_level warn;" and then I see limits as warn-logs and delaying as info; since I only view warn+ levels, it is omitted from the logfile completely. --- Med venlig hilsen / Best Regards Stephan Ryer Møller Partner & CTO inMobile ApS Axel Kiers Vej 18L DK-8270 Højbjerg Dir. +45 82 82 66 92 E-mail: sr at inmobile.dk Web: www.inmobile.dk Tel: +45 88 33 66 99 2017-11-20 14:01 GMT+01:00 Maxim Dounin : > Hello! > > On Mon, Nov 20, 2017 at 11:33:26AM +0100, Stephan Ryer wrote: > > > We are using nginx as a proxy server in front of our IIS servers. > > > > We have a client who needs to call us up to 200 times per second. Due to > > the roundtrip-time, 16 simultaneous connections are opened from the > client > > and each connection is used independently to send an https request, wait > for > > x ms and then send again. > > > > I have been doing some tests and looked into the throttle logic in the > > nginx-code. It seems that when setting request limit to 200/sec it is > > actually interpreted as "minimum 5ms per call" in the code. If we > receive 2 > > calls at the same time, the warning log will show an "excess"-message and > > the call will be delayed to ensure a minimum of 5ms between the calls.. > > (and if no burst is set, it will be an error message in the log and an > > error will be returned to the client) > > > > We have set burst to 20, meaning that when our client only sends 1 > request > > at a time per connection, he will never get an error reply from nginx, > > instead nginx just delays the call.
I conclude that this is by design. > > Yes, the code counts average request rate, and if it sees two > requests with just 1ms between them the average rate will be 1000 > requests per second. This is more than what is allowed, and hence > nginx will either delay the second request (unless configured with > "nodelay"), or will reject it if the configured burst size is > reached. > > > The issue, however, is that a client using multiple connections naturally > > often won't be able to time the calls between each connection. And even > > though our burst has been set to 20, our log is spammed with > warning-messages > > which I do not think should be a warning at all. There is a difference > > between sending 2 calls at the same time and sending a total of 201 > > requests within a second, the latter being the only case I would expect > to > > be logged as a warning. > > If you are not happy with log levels used, you can easily tune > them using the limit_req_log_level directive. See > http://nginx.org/r/limit_req_log_level for details. > > Note well that given the use case description, you probably don't > need requests to be delayed at all, so consider using "limit_req > .. nodelay;". It will avoid delaying logic altogether, thus > allowing as many requests as burst permits. > > > Instead of calculating the throttling by simply looking at the last call > > time and calculating a minimum timespan between last call and current > call, I > > would like the logic to be that nginx keeps a counter of the number of > > requests within the current second, and when the second expires and a > new > > second exists, the counter is reset. > > This approach is not scalable. For example, it won't allow > configuring a limit of 1 request per minute. Moreover, it can > easily allow more requests in a single second than configured - > for example, a client can do 200 requests at 0.999 and an additional > 200 requests at 1.000.
According to your algorithm, this is > allowed, yet it is 400 requests in just 2 milliseconds. > > The current implementation is much more robust, and it can be > configured for various use cases. In particular, if you want to > maintain a limit of 200 requests per second and want to tolerate > cases when a client does all requests allowed within a second at > the same time, consider: > > limit_req_zone $binary_remote_addr zone=one:10m rate=200r/s; > limit_req zone=one burst=200 nodelay; > > This will switch off delays as already suggested above, and will > allow a burst of up to 200 requests - that is, a client is allowed > to do all 200 requests when a second starts. (If you really want > to allow the case with 400 requests in 2 milliseconds as described > above, consider using burst=400.) > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter_booth at me.com Mon Nov 20 16:19:16 2017 From: peter_booth at me.com (Peter Booth) Date: Mon, 20 Nov 2017 11:19:16 -0500 Subject: Issue with flooded warning and request limiting In-Reply-To: References: <20171120130102.GD62893@mdounin.ru> Message-ID: FWIW - I have found rate limiting very useful (with hardware LB as well as nginx) but, because of the inherent burstiness of web traffic, I typically set my threshold to 10x or 20x my expected "reasonable peak rate". The rationale is that this is a very crude tool, just one of many that need to work together to protect the backend from both reasonable variations in workload and malicious use. When combined with smart use of browser cache, CDN, microcaching in nginx, canonical names, smart cache key design, you can get an inexpensive nginx server to offer similar functionality to a $50k F5 BigIP LTM+WAF at less than 1/10 the cost.
But all of these features need to be used delicately if you want to avoid rejecting valid requests. Peter Sent from my iPhone > On Nov 20, 2017, at 9:28 AM, Stephan Ryer wrote: > > Thank you very much for clearing this out. All I need to do is "limit_req_log_level warn;" and then I see limits as warn-logs and delaying as info; since I only view warn+ levels, it is omitted from the logfile completely. > > --- > > Med venlig hilsen / Best Regards > Stephan Ryer Møller > Partner & CTO > > inMobile ApS > Axel Kiers Vej 18L > DK-8270 Højbjerg > > Dir. +45 82 82 66 92 > E-mail: sr at inmobile.dk > > Web: www.inmobile.dk > Tel: +45 88 33 66 99 > > 2017-11-20 14:01 GMT+01:00 Maxim Dounin : >> Hello! >> >> On Mon, Nov 20, 2017 at 11:33:26AM +0100, Stephan Ryer wrote: >> >> > We are using nginx as a proxy server in front of our IIS servers. >> > >> > We have a client who needs to call us up to 200 times per second. Due to >> > the roundtrip-time, 16 simultaneous connections are opened from the client >> > and each connection is used independently to send an https request, wait for >> > x ms and then send again. >> > >> > I have been doing some tests and looked into the throttle logic in the >> > nginx-code. It seems that when setting request limit to 200/sec it is >> > actually interpreted as "minimum 5ms per call" in the code. If we receive 2 >> > calls at the same time, the warning log will show an "excess"-message and >> > the call will be delayed to ensure a minimum of 5ms between the calls.. >> > (and if no burst is set, it will be an error message in the log and an >> > error will be returned to the client) >> > >> > We have set burst to 20, meaning that when our client only sends 1 request >> > at a time per connection, he will never get an error reply from nginx, >> > instead nginx just delays the call. I conclude that this is by design.
>> >> Yes, the code counts average request rate, and if it sees two >> requests with just 1ms between them the average rate will be 1000 >> requests per second. This is more than what is allowed, and hence >> nginx will either delay the second request (unless configured with >> "nodelay"), or will reject it if the configured burst size is >> reached. >> >> > The issue, however, is that a client using multiple connections naturally >> > often won't be able to time the calls between each connection. And even >> > though our burst has been set to 20, our log is spammed with warning-messages >> > which I do not think should be a warning at all. There is a difference >> > between sending 2 calls at the same time and sending a total of 201 >> > requests within a second, the latter being the only case I would expect to >> > be logged as a warning. >> >> If you are not happy with log levels used, you can easily tune >> them using the limit_req_log_level directive. See >> http://nginx.org/r/limit_req_log_level for details. >> >> Note well that given the use case description, you probably don't >> need requests to be delayed at all, so consider using "limit_req >> .. nodelay;". It will avoid delaying logic altogether, thus >> allowing as many requests as burst permits. >> >> > Instead of calculating the throttling by simply looking at the last call >> > time and calculating a minimum timespan between last call and current call, I >> > would like the logic to be that nginx keeps a counter of the number of >> > requests within the current second, and when the second expires and a new >> > second exists, the counter is reset. >> >> This approach is not scalable. For example, it won't allow >> configuring a limit of 1 request per minute. Moreover, it can >> easily allow more requests in a single second than configured - >> for example, a client can do 200 requests at 0.999 and an additional >> 200 requests at 1.000.
According to your algorithm, this is >> allowed, yet it is 400 requests in just 2 milliseconds. >> >> The current implementation is much more robust, and it can be >> configured for various use cases. In particular, if you want to >> maintain a limit of 200 requests per second and want to tolerate >> cases when a client does all requests allowed within a second at >> the same time, consider: >> >> limit_req_zone $binary_remote_addr zone=one:10m rate=200r/s; >> limit_req zone=one burst=200 nodelay; >> >> This will switch off delays as already suggested above, and will >> allow a burst of up to 200 requests - that is, a client is allowed >> to do all 200 requests when a second starts. (If you really want >> to allow the case with 400 requests in 2 milliseconds as described >> above, consider using burst=400.) >> >> -- >> Maxim Dounin >> http://mdounin.ru/ >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon Nov 20 17:46:31 2017 From: nginx-forum at forum.nginx.org (shivramg94) Date: Mon, 20 Nov 2017 12:46:31 -0500 Subject: Nginx reload intermittently fails when protocol specified in proxy_pass directive is specified as HTTPS Message-ID: <0c35f98bbb70324f5eacaf343e06d0b8.NginxMailingListEnglish@forum.nginx.org> I am trying to use nginx as a reverse proxy with upstream SSL. For this, I am using the below directive in the nginx configuration file proxy_pass https://; where "" is another file which has the list of upstream servers.
upstream { server : weight=1; keepalive 100; } With this configuration if I try to reload the Nginx configuration, it fails intermittently with the below error message nginx: [emerg] host not found in upstream \"\" However, if I changed the protocol mentioned in the proxy_pass directive from https to http, then the reload goes through. Could anyone please explain what mistake I might be doing here? Thanks in advance. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277415,277415#msg-277415 From arozyev at nginx.com Mon Nov 20 18:02:24 2017 From: arozyev at nginx.com (Aziz Rozyev) Date: Mon, 20 Nov 2017 21:02:24 +0300 Subject: Nginx reload intermittently fails when protocol specified in proxy_pass directive is specified as HTTPS In-Reply-To: <0c35f98bbb70324f5eacaf343e06d0b8.NginxMailingListEnglish@forum.nginx.org> References: <0c35f98bbb70324f5eacaf343e06d0b8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <729DBC20-2E78-4BA8-811F-FBA61736D2D1@nginx.com> Hi, try 1) curl -ivvv https:// to your upstreams. 2) add server :443 (if your upstreams are accepting ssl connections on 443) br, Aziz. > On 20 Nov 2017, at 20:46, shivramg94 wrote: > > I am trying to use nginx as a reverse proxy with upstream SSL. For this, I > am using the below directive in the nginx configuration file > > proxy_pass https://; > > where "" is another file which has the list of > upstream servers.
> > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277415,277415#msg-277415 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Mon Nov 20 18:41:00 2017 From: nginx-forum at forum.nginx.org (shivramg94) Date: Mon, 20 Nov 2017 13:41:00 -0500 Subject: Nginx reload intermittently fails when protocol specified in proxy_pass directive is specified as HTTPS In-Reply-To: <729DBC20-2E78-4BA8-811F-FBA61736D2D1@nginx.com> References: <729DBC20-2E78-4BA8-811F-FBA61736D2D1@nginx.com> Message-ID: <60f1614df155d3751234015938de0b47.NginxMailingListEnglish@forum.nginx.org> Just one quick question. Does Nginx check if the upstream servers are reachable via the specified protocol during the reload process? If, say, the upstreams are not accepting ssl connections in this case, will the reload fail? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277399,277418#msg-277418 From mdounin at mdounin.ru Mon Nov 20 18:48:30 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 20 Nov 2017 21:48:30 +0300 Subject: Nginx reload intermittently fails when protocol specified in proxy_pass directive is specified as HTTPS In-Reply-To: <0c35f98bbb70324f5eacaf343e06d0b8.NginxMailingListEnglish@forum.nginx.org> References: <0c35f98bbb70324f5eacaf343e06d0b8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20171120184830.GL62893@mdounin.ru> Hello! On Mon, Nov 20, 2017 at 12:46:31PM -0500, shivramg94 wrote: > I am trying to use nginx as a reverse proxy with upstream SSL. For this, I > am using the below directive in the nginx configuration file > > proxy_pass https://; > > where "" is another file which has the list of > upstream servers.
> > upstream { > server : weight=1; > keepalive 100; > } > > With this configuration if I try to reload the Nginx configuration, it fails > intermittently with the below error message > > nginx: [emerg] host not found in upstream \"\" > > However, if I changed the protocol mentioned in the proxy_pass directive > from https to http, then the reload goes through. > > Could anyone please explain what mistake I might be doing here? Most likely you are trying to use the same upstream block in both "proxy_pass http://..." and "proxy_pass https://...", and define upstream after it is used in proxy_pass. That is, your configuration is essentially as follows: server { location / { proxy_pass http://u; } ... } server { location / { proxy_pass https://u; } ... } upstream u { server 127.0.0.1:8080; } Due to implementation details this won't properly use upstream "u" in both first and second servers (some additional details can be found at https://trac.nginx.org/nginx/ticket/1059). Trivial fix is to move upstream block before the servers, that is, to define it before it is used. Note though that this will result in an incorrect configuration, as the same server (127.0.0.1:8080 in the above example) will be used for both http and https connections, and this is not going to work either for http or for https, depending on how the backend is configured. Instead, you probably want to define two distinct upstream blocks for http and https with different ports. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue Nov 21 15:26:14 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 21 Nov 2017 18:26:14 +0300 Subject: nginx-1.13.7 Message-ID: <20171121152614.GQ62893@mdounin.ru> Changes with nginx 1.13.7 21 Nov 2017 *) Bugfix: in the $upstream_status variable. *) Bugfix: a segmentation fault might occur in a worker process if a backend returned a "101 Switching Protocols" response to a subrequest. 
*) Bugfix: a segmentation fault occurred in a master process if a shared memory zone size was changed during a reconfiguration and the reconfiguration failed. *) Bugfix: in the ngx_http_fastcgi_module. *) Bugfix: nginx returned the 500 error if parameters without variables were specified in the "xslt_stylesheet" directive. *) Workaround: "gzip filter failed to use preallocated memory" alerts appeared in logs when using a zlib library variant from Intel. *) Bugfix: the "worker_shutdown_timeout" directive did not work when using mail proxy and when proxying WebSocket connections. -- Maxim Dounin http://nginx.org/ From kworthington at gmail.com Tue Nov 21 17:39:45 2017 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 21 Nov 2017 12:39:45 -0500 Subject: [nginx-announce] nginx-1.13.7 In-Reply-To: <20171121152618.GR62893@mdounin.ru> References: <20171121152618.GR62893@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.13.7 for Windows https://kevinworthington.com/nginxwin1137 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin-based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) https://kevinworthington.com/ https://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Nov 21, 2017 at 10:26 AM, Maxim Dounin wrote: > Changes with nginx 1.13.7 21 Nov > 2017 > > *) Bugfix: in the $upstream_status variable. > > *) Bugfix: a segmentation fault might occur in a worker process if a > backend returned a "101 Switching Protocols" response to a > subrequest. > > *) Bugfix: a segmentation fault occurred in a master process if a > shared > memory zone size was changed during a reconfiguration and the > reconfiguration failed.
> > *) Bugfix: in the ngx_http_fastcgi_module. > > *) Bugfix: nginx returned the 500 error if parameters without variables > were specified in the "xslt_stylesheet" directive. > > *) Workaround: "gzip filter failed to use preallocated memory" alerts > appeared in logs when using a zlib library variant from Intel. > > *) Bugfix: the "worker_shutdown_timeout" directive did not work when > using mail proxy and when proxying WebSocket connections. > > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas at glanzmann.de Wed Nov 22 09:41:49 2017 From: thomas at glanzmann.de (Thomas Glanzmann) Date: Wed, 22 Nov 2017 10:41:49 +0100 Subject: Set Expires Header only if upstream has not already set an Expires Message-ID: <20171122094149.GD3702@glanzmann.de> Hello, I would like to add an Expires Header only to upstream content that has not already set an Expires header. Is there an easy way to do that with nginx? I thought about trying to add a header_filter_by_lua checking the Expires header and set the necessary value if not already set. Is there an easier way to do the same? Cheers, Thomas From smntov at gmail.com Wed Nov 22 15:34:13 2017 From: smntov at gmail.com (ST) Date: Wed, 22 Nov 2017 17:34:13 +0200 Subject: nginx seems to treat %3F as "?" Message-ID: <1511364853.1675.55.camel@gmail.com> Hello, I have following redirection rule defined: location ~ "^/(.*)\.html[^\?]+" { return 301 /$1.html; } so that everything besides "?" after an URL gets truncated: like example.com/test.html%D1%80%D0%BE%D1%80%D0%BB -> example.com/test.html however it doesn't work when "?" is url encoded into %3F. I would like example.com/test.html%3F to redirect to example.com/test.html Is it possible somehow? Thank you! 
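[The conditional-Expires setup Thomas asks about a little earlier can be done without Lua by keying an expires value off $upstream_http_expires, which is empty when the backend sent no Expires header. A minimal sketch; the upstream name "backend" and the 7d fallback are illustrative assumptions, and variable support in "expires" requires nginx 1.7.9 or later:

```nginx
http {
    # Evaluated lazily at response time; $upstream_http_expires is
    # empty when the upstream response carried no Expires header.
    map $upstream_http_expires $custom_expires {
        default off;   # upstream already set Expires: change nothing
        ""      7d;    # no Expires from upstream: add one, 7 days out
    }

    server {
        listen 80;

        location / {
            proxy_pass http://backend;   # "backend" is a placeholder upstream
            expires    $custom_expires;
        }
    }
}
```

The map block must sit at the http level, while the expires directive goes wherever the proxied responses are produced.]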
From ndeaks at outlook.com Wed Nov 22 18:28:45 2017 From: ndeaks at outlook.com (Noel Deacon) Date: Wed, 22 Nov 2017 18:28:45 +0000 Subject: nginx reload issues Message-ID: Hi, I have a centos7 server as a reverse proxy. When I either make a change to an existing v.hosts file or create a new one and test the set up with "nginx -t", and it works, I trigger the changes with either "systemctl reload nginx" or "nginx -s reload". Unfortunately this seems to no longer reload the new config? If I actually restart the Nginx server itself ("systemctl restart nginx") it works but obviously this breaks current connections. There are no errors in the log files Any ideas as to what may be causing the problem? -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Nov 22 22:11:30 2017 From: francis at daoine.org (Francis Daly) Date: Wed, 22 Nov 2017 22:11:30 +0000 Subject: nginx seems to treat %3F as "?" In-Reply-To: <1511364853.1675.55.camel@gmail.com> References: <1511364853.1675.55.camel@gmail.com> Message-ID: <20171122221130.GL3127@daoine.org> On Wed, Nov 22, 2017 at 05:34:13PM +0200, ST wrote: Hi there, > I have following redirection rule defined: > > location ~ "^/(.*)\.html[^\?]+" { That says: /anything.html followed by one-or-more things that are not ?. Note that "location" works on the unencoded url version, and does not include the ? that marks the query string, or anything after it. > return 301 /$1.html; > } > > so that everything besides "?" after an URL gets truncated: No. *everything* after .html gets removed in the 301 response, provided that there is something immediately after .html that is not a ? (which would be %3F in the original url, because ? is special). > like > example.com/test.html%D1%80%D0%BE%D1%80%D0%BB -> example.com/test.html The thing immediately after .html is the unencoded version of %D1, which is not ?, so the location matches and the rewrite happens. > however it doesn't work when "?" 
is url encoded into %3F. I would like > example.com/test.html%3F to redirect to example.com/test.html Your location block explicitly excludes that case. Why? As in: I do not understand what the thing that you are trying to achieve is. Can you explain it? Perhaps with examples of some things that should be rewritten and some things that should not be? At a guess, perhaps location ~ ^/(.*)\.html. { is what you want? Starts with /, includes .html, and is followed by anything. That should match /a.htmlx and /a.html%3Fx, but not /a.html or /a.html?x=y f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Nov 22 23:26:33 2017 From: francis at daoine.org (Francis Daly) Date: Wed, 22 Nov 2017 23:26:33 +0000 Subject: Set Expires Header only if upstream has not already set an Expires In-Reply-To: <20171122094149.GD3702@glanzmann.de> References: <20171122094149.GD3702@glanzmann.de> Message-ID: <20171122232633.GM3127@daoine.org> On Wed, Nov 22, 2017 at 10:41:49AM +0100, Thomas Glanzmann wrote: Hi there, > I would like to add an Expires Header only to upstream content that has > not already set an Expires header. Is there an easy way to do that with > nginx? http://nginx.org/r/expires has an example of setting a value based on $sent_http_content_type. You can set a value based on $upstream_http_expires -- { default off; "" 7d; } in the appropriate "map" should set your Expires time to 7 days from now if there is not an Expires: header from the upstream. f -- Francis Daly francis at daoine.org From smntov at gmail.com Thu Nov 23 09:24:17 2017 From: smntov at gmail.com (ST) Date: Thu, 23 Nov 2017 11:24:17 +0200 Subject: nginx seems to treat %3F as "?" 
In-Reply-To: <20171122221130.GL3127@daoine.org> References: <1511364853.1675.55.camel@gmail.com> <20171122221130.GL3127@daoine.org> Message-ID: <1511429057.14360.1.camel@gmail.com> On Wed, 2017-11-22 at 22:11 +0000, Francis Daly wrote: > On Wed, Nov 22, 2017 at 05:34:13PM +0200, ST wrote: > > Hi there, > > > I have following redirection rule defined: > > > > location ~ "^/(.*)\.html[^\?]+" { > > That says: /anything.html followed by one-or-more things that are not ?. > > Note that "location" works on the unencoded url version, and does not > include the ? that marks the query string, or anything after it. > > > return 301 /$1.html; > > } > > > > so that everything besides "?" after an URL gets truncated: > > No. > > *everything* after .html gets removed in the 301 response, provided > that there is something immediately after .html that is not a ? (which > would be %3F in the original url, because ? is special). > > > like > > example.com/test.html%D1%80%D0%BE%D1%80%D0%BB -> example.com/test.html > > The thing immediately after .html is the unencoded version of %D1, which is > not ?, so the location matches and the rewrite happens. > > > however it doesn't work when "?" is url encoded into %3F. I would like > > example.com/test.html%3F to redirect to example.com/test.html > > Your location block explicitly excludes that case. > > Why? > > As in: I do not understand what the thing that you are trying to achieve > is. Can you explain it? Perhaps with examples of some things that should > be rewritten and some things that should not be? > > At a guess, perhaps > > location ~ ^/(.*)\.html. { > > is what you want? Starts with /, includes .html, and is followed by anything. > > That should match /a.htmlx and /a.html%3Fx, but not /a.html or /a.html?x=y Thank you very much! That was exactly what I needed! 
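For anyone skimming the thread, the rule that solved it boils down to the following (keeping in mind, as Francis notes, that location matching runs on the decoded URI, so a %3F arrives here as a literal "?", while a real query string never reaches the match at all):

```
location ~ ^/(.*)\.html. {
    # Matches /test.htmlanything and /test.html%3Fanything,
    # but not /test.html itself or /test.html?x=y.
    return 301 /$1.html;
}
```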
From nginx-forum at forum.nginx.org Thu Nov 23 15:00:14 2017 From: nginx-forum at forum.nginx.org (sh4ka) Date: Thu, 23 Nov 2017 10:00:14 -0500 Subject: Nginx + SSL problem for old browsers Message-ID: Hello everybody, I have one single website running a RapidSSL certificate, that doesn't work on old mobile phones and browsers, like Symbian. My customer, however, insist in having this site with SSL fully compatible with old browsers. I am already using and old cipher for old browsers generated at https://mozilla.github.io/server-side-tls/ssl-config-generator/ However, still doesn't work. Just in case, on the same server I serve lot of other SSL certificates, all sharing the same IP. This is my current Nginx configuration for this site : # SSL config listen 443 ssl; ssl on; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:DES-CBC3-SHA:HIGH:SEED:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!RSAPSK:!aDH:!aECDH:!EDH-DSS-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA:!SRP'; ssl_prefer_server_ciphers On; ssl_dhparam /etc/nginx/dhparams.pem; ssl_certificate /etc/nginx/ssl.crt/www.mysite.com.crt; ssl_certificate_key /etc/nginx/ssl.key/www.mysite.com.key; ssl_session_cache shared:SSL:20m; ssl_session_timeout 10m; # SSL config Thanks Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,277461,277461#msg-277461 From lagged at gmail.com Thu Nov 23 15:00:52 2017 From: lagged at gmail.com (Andrei) Date: Thu, 23 Nov 2017 09:00:52 -0600 Subject: Migrating from Varnish Message-ID: Hi all, I've been using Varnish for 4 years now, but quite frankly I'm tired of using it for HTTP traffic and Nginx for SSL offloading when Nginx can just handle it all. One of the main issues I'm running into with the transition is related to cache purging, and setting custom expiry TTL's per zone/domain. My questions are: - Does anyone have any recent working documentation on supported modules/Lua scripts which can achieve wildcard purges as well as specific URL purges? - How should I go about defining custom cache TTL's for: frontpage, dynamic, and static content requests? Currently I have Varnish configured to set the ttl's based on request headers which are added in the config with regex matches against the host being accessed. Any other caveats or suggestions I should possibly know of? --Andrei -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Nov 23 15:18:13 2017 From: nginx-forum at forum.nginx.org (sh4ka) Date: Thu, 23 Nov 2017 10:18:13 -0500 Subject: Nginx + SSL problem for old browsers In-Reply-To: References: Message-ID: Could this be because of the lack of SNI support on old browsers? Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277461,277463#msg-277463 From lagged at gmail.com Thu Nov 23 15:23:26 2017 From: lagged at gmail.com (Andrei) Date: Thu, 23 Nov 2017 09:23:26 -0600 Subject: Migrating from Varnish In-Reply-To: References: Message-ID: To follow up on the purge implementation, I would like to avoid going through the entire cache dir for a wildcard request, as the sites I have stack up over 200k objects. 
I'm wondering if there would be a clean way of taking a passive route, through which cache would be invalidated/"refreshed" by subsequent requests. As in I send a purge request for https://domain.com/.*, and subsequent requests for cached items would then fetch the request from the backend, and update the cache. If that makes any sense.. On Nov 23, 2017 17:00, "Andrei" wrote: > Hi all, > > I've been using Varnish for 4 years now, but quite frankly I'm tired of > using it for HTTP traffic and Nginx for SSL offloading when Nginx can just > handle it all. One of the main issues I'm running into with the transition > is related to cache purging, and setting custom expiry TTL's per > zone/domain. My questions are: > > - Does anyone have any recent working documentation on supported > modules/Lua scripts which can achieve wildcard purges as well as specific > URL purges? > > - How should I go about defining custom cache TTL's for: frontpage, > dynamic, and static content requests? Currently I have Varnish configured > to set the ttl's based on request headers which are added in the config > with regex matches against the host being accessed. > > Any other caveats or suggestions I should possibly know of? > > --Andrei > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Nov 23 15:55:40 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 23 Nov 2017 18:55:40 +0300 Subject: Migrating from Varnish In-Reply-To: References: Message-ID: <20171123155540.GF78325@mdounin.ru> Hello! On Thu, Nov 23, 2017 at 09:00:52AM -0600, Andrei wrote: > Hi all, > > I've been using Varnish for 4 years now, but quite frankly I'm tired of > using it for HTTP traffic and Nginx for SSL offloading when Nginx can just > handle it all. One of the main issues I'm running into with the transition > is related to cache purging, and setting custom expiry TTL's per > zone/domain. 
My questions are: > > - Does anyone have any recent working documentation on supported > modules/Lua scripts which can achieve wildcard purges as well as specific > URL purges? Cache purging is available in nginx-plus, see http://nginx.org/r/proxy_cache_purge. > - How should I go about defining custom cache TTL's for: frontpage, > dynamic, and static content requests? Currently I have Varnish configured > to set the ttl's based on request headers which are added in the config > with regex matches against the host being accessed. Normal nginx approach is to configure distinct server{} and location{} blocks for different content, with appropriate cache validity times. For example: server { listen 80; server_name foo.example.com; location / { proxy_pass http://backend; proxy_cache one; proxy_cache_valid 200 5m; } location /static/ { proxy_pass http://backend; proxy_cache one; proxy_cache_valid 200 24h; } } Note well that by default nginx respects what is returned by the backend in various response headers, and proxy_cache_valid time only applies if there are no explicit cache validity time set, see http://nginx.org/r/proxy_ignore_headers. -- Maxim Dounin http://mdounin.ru/ From lagged at gmail.com Thu Nov 23 16:24:19 2017 From: lagged at gmail.com (Andrei) Date: Thu, 23 Nov 2017 10:24:19 -0600 Subject: Migrating from Varnish In-Reply-To: <20171123155540.GF78325@mdounin.ru> References: <20171123155540.GF78325@mdounin.ru> Message-ID: Hello Maxim! On Nov 23, 2017 17:55, "Maxim Dounin" wrote: Hello! On Thu, Nov 23, 2017 at 09:00:52AM -0600, Andrei wrote: > Hi all, > > I've been using Varnish for 4 years now, but quite frankly I'm tired of > using it for HTTP traffic and Nginx for SSL offloading when Nginx can just > handle it all. One of the main issues I'm running into with the transition > is related to cache purging, and setting custom expiry TTL's per > zone/domain. 
My questions are: > > - Does anyone have any recent working documentation on supported > modules/Lua scripts which can achieve wildcard purges as well as specific > URL purges? Cache purging is available in nginx-plus, see http://nginx.org/r/proxy_cache_purge. I'm aware of the paid version, but I don't have a budget for it yet, and quite frankly this should be a core feature for any caching service. Are there no viable options for the community release? It's a rather pertinent feature to have in my transition > - How should I go about defining custom cache TTL's for: frontpage, > dynamic, and static content requests? Currently I have Varnish configured > to set the ttl's based on request headers which are added in the config > with regex matches against the host being accessed. Normal nginx approach is to configure distinct server{} and location{} blocks for different content, with appropriate cache validity times. For example: server { listen 80; server_name foo.example.com; location / { proxy_pass http://backend; proxy_cache one; proxy_cache_valid 200 5m; } location /static/ { proxy_pass http://backend; proxy_cache one; proxy_cache_valid 200 24h; } } Note well that by default nginx respects what is returned by the backend in various response headers, and proxy_cache_valid time only applies if there are no explicit cache validity time set, see http://nginx.org/r/proxy_ignore_headers. So to override the ttls set by the backend, I would have to use proxy_ignore_headers for all headers which can directly affect the intended TTL? Thank you for your time! -- Maxim Dounin http://mdounin.ru/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Thu Nov 23 17:52:31 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 23 Nov 2017 20:52:31 +0300 Subject: Migrating from Varnish In-Reply-To: References: <20171123155540.GF78325@mdounin.ru> Message-ID: <20171123175231.GJ78325@mdounin.ru> Hello! On Thu, Nov 23, 2017 at 10:24:19AM -0600, Andrei wrote: > > > - Does anyone have any recent working documentation on supported > > > modules/Lua scripts which can achieve wildcard purges as well as specific > > > URL purges? > > > > Cache purging is available in nginx-plus, see > > http://nginx.org/r/proxy_cache_purge. > > I'm aware of the paid version, but I don't have a budget for it yet, and > quite frankly this should be a core feature for any caching service. Are > there no viable options for the community release? It's a rather pertinent > feature to have in my transition I'm aware of at least one 3rd party module from Piotr Sikora, https://github.com/FRiCKLE/ngx_cache_purge. I've never tried to use it, and AFAIK it doesn't support wildcard purges. It is mostly known to developers as a hack that starts segfaulting on unrelated changes in proxy module, so obviously enough I can't recommend using it. Note though that I personally think that cache purging is something that should _not_ be present in any caching service, and I wouldn't recommend using nginx-plus functionality either. Proper controlling of cache validity times is something that should be used instead. This is what happens in browsers anyway, and trying to "purge" things there won't work. > > Note well that by default nginx respects what is returned by the > > backend in various response headers, and proxy_cache_valid time > > only applies if there are no explicit cache validity time set, see > > http://nginx.org/r/proxy_ignore_headers. > > So to override the ttls set by the backend, I would have to use > proxy_ignore_headers for all headers which can directly affect the intended > TTL? 
Yes, if you want to ignore what the backend set. In many cases this might not be a good idea though. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Thu Nov 23 17:53:49 2017 From: nginx-forum at forum.nginx.org (itpp2012) Date: Thu, 23 Nov 2017 12:53:49 -0500 Subject: Migrating from Varnish In-Reply-To: References: Message-ID: <0c6a62d9152c05c31e28b3a73f0c8210.NginxMailingListEnglish@forum.nginx.org> Andrei Wrote: ------------------------------------------------------- > I'm aware of the paid version, but I don't have a budget for it yet, > and > quite frankly this should be a core feature for any caching service. > Are > there no viable options for the community release? It's a rather https://github.com/FRiCKLE/ngx_cache_purge/ Easy to implement (add). Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277462,277476#msg-277476 From lagged at gmail.com Thu Nov 23 20:32:19 2017 From: lagged at gmail.com (Andrei) Date: Thu, 23 Nov 2017 14:32:19 -0600 Subject: Migrating from Varnish In-Reply-To: <20171123175231.GJ78325@mdounin.ru> References: <20171123155540.GF78325@mdounin.ru> <20171123175231.GJ78325@mdounin.ru> Message-ID: Hello, On Thu, Nov 23, 2017 at 11:52 AM, Maxim Dounin wrote: > Hello! > > On Thu, Nov 23, 2017 at 10:24:19AM -0600, Andrei wrote: > > > > > - Does anyone have any recent working documentation on supported > > > > modules/Lua scripts which can achieve wildcard purges as well as > specific > > > > URL purges? > > > > > > Cache purging is available in nginx-plus, see > > > http://nginx.org/r/proxy_cache_purge. > > > > I'm aware of the paid version, but I don't have a budget for it yet, and > > quite frankly this should be a core feature for any caching service. Are > > there no viable options for the community release? It's a rather > pertinent > > feature to have in my transition > > I'm aware of at least one 3rd party module from Piotr Sikora, > https://github.com/FRiCKLE/ngx_cache_purge. 
I've never tried to > use it, and AFAIK it doesn't support wildcard purges. It is > mostly known to developers as a hack that starts segfaulting on > unrelated changes in proxy module, so obviously enough I can't > recommend using it. > > Thanks for mentioning the segfaulting issues, definitely not something I want to run into. I saw that one, and some more details with Lua in these: https://scene-si.org/2017/01/08/improving-nginx-lua-cache-purges/ https://scene-si.org/2016/11/02/purging-cached-items-from-nginx-with-lua/ http://syshero.org/post/68479556365/nginx-passive-cache-invalidation >From what I'm seeing available, it looks as though my best bet is Lua, or pushing all the purge requests to a custom backend service that's going to queue/handle the file removals. Redis tracking also comes to mind since I'm going to be doing live stats for the traffic so that's another hop I have to factor in. With these options I'm thinking of doing a cache zone per domain, or perhaps group of domains. Are there any performance impacts from having for example tens or hundreds of cache zones defined? Note though that I personally think that cache purging is > something that should _not_ be present in any caching service, and > I wouldn't recommend using nginx-plus functionality either. > Proper controlling of cache validity times is something that > should be used instead. This is what happens in browsers anyway, > and trying to "purge" things there won't work. > I'm sorry but I strongly disagree here. Every respectable CDN service which offers caching, also offers purging. People want their content updated on edge service when changes are made to their application, and they want their applications to be able to talk to edge services. Take a busy news site for example. When a tag/post/page is updated, they expect viewers to be able to see it right then and there, not when cache expires. If they wait for cache to expire, they lose viewers and $$ due to delays. 
> > > Note well that by default nginx respects what is returned by the > > > backend in various response headers, and proxy_cache_valid time > > > only applies if there are no explicit cache validity time set, see > > > http://nginx.org/r/proxy_ignore_headers. > > > > So to override the ttls set by the backend, I would have to use > > proxy_ignore_headers for all headers which can directly affect the > intended > > TTL? > > Yes, if you want to ignore what the backend set. In many cases > this might not be a good idea though. > I understand why it's not always a good idea. I have numerous checks and balances accumulated over the years at the moment which I'm working on porting over. Overriding backend cache headers on a granular level is something I enjoy :) > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lagged at gmail.com Thu Nov 23 20:35:06 2017 From: lagged at gmail.com (Andrei) Date: Thu, 23 Nov 2017 14:35:06 -0600 Subject: Migrating from Varnish In-Reply-To: <0c6a62d9152c05c31e28b3a73f0c8210.NginxMailingListEnglish@forum.nginx.org> References: <0c6a62d9152c05c31e28b3a73f0c8210.NginxMailingListEnglish@forum.nginx.org> Message-ID: Thanks for the tip. Have you ran into any issues as Maxim mentioned? On Thu, Nov 23, 2017 at 11:53 AM, itpp2012 wrote: > Andrei Wrote: > ------------------------------------------------------- > > I'm aware of the paid version, but I don't have a budget for it yet, > > and > > quite frankly this should be a core feature for any caching service. > > Are > > there no viable options for the community release? It's a rather > > https://github.com/FRiCKLE/ngx_cache_purge/ > > Easy to implement (add). > > Posted at Nginx Forum: https://forum.nginx.org/read. 
> php?2,277462,277476#msg-277476 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Nov 23 21:51:26 2017 From: nginx-forum at forum.nginx.org (itpp2012) Date: Thu, 23 Nov 2017 16:51:26 -0500 Subject: Migrating from Varnish In-Reply-To: References: Message-ID: Andrei Wrote: ------------------------------------------------------- > Thanks for the tip. Have you ran into any issues as Maxim mentioned? > Not yet. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277462,277487#msg-277487 From dfrancislyon at yahoo.com Fri Nov 24 02:25:14 2017 From: dfrancislyon at yahoo.com (Daniel Francis-Lyon) Date: Thu, 23 Nov 2017 18:25:14 -0800 Subject: Please take me off the mailing list Message-ID: <62c5cb73-fe19-4725-9e89-f83246a37df5@email.android.com> An HTML attachment was scrubbed... URL: From fsantiago at garbage-juice.com Fri Nov 24 02:47:22 2017 From: fsantiago at garbage-juice.com (Fabian A. Santiago) Date: Thu, 23 Nov 2017 21:47:22 -0500 Subject: Please take me off the mailing list In-Reply-To: <62c5cb73-fe19-4725-9e89-f83246a37df5@email.android.com> References: <62c5cb73-fe19-4725-9e89-f83246a37df5@email.android.com> Message-ID: On November 23, 2017 9:25:14 PM EST, Daniel Francis-Lyon via nginx wrote: >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx All messages have a link at the bottom. Follow it and unsubscribe yourself. -- Thanks, Fabian S. 
OpenPGP: 3C3FA072ACCB7AC5DB0F723455502B0EEB9070FC From nginx-forum at forum.nginx.org Fri Nov 24 08:04:11 2017 From: nginx-forum at forum.nginx.org (joseph-pg) Date: Fri, 24 Nov 2017 03:04:11 -0500 Subject: Cannot build 1.12.x on MSYS2/MinGW64 Message-ID: <8d2c4d68edf4bd3203382d5ecd0dbefb.NginxMailingListEnglish@forum.nginx.org> Hi, The build fails without this patch. Is there a chance for this to be merged to the 1.12 branch? https://hg.nginx.org/nginx/rev/4a343228c55e Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277492,277492#msg-277492 From nginx-forum at forum.nginx.org Fri Nov 24 08:23:30 2017 From: nginx-forum at forum.nginx.org (crasyangel) Date: Fri, 24 Nov 2017 03:23:30 -0500 Subject: ngx_http_upstream_test_next u->peer.tries > 1 Message-ID: <30c507f7e240bb23c0ef9d6959aad146.NginxMailingListEnglish@forum.nginx.org> assume all servers always fail in upstream nginx would call ngx_http_upstream_next when u->peer.tries > 1, and call ngx_http_upstream_finalize_request directly when u->peer.tries == 1 it would not pass NGX_PEER_FAILED to u->peer.free so how peer->fails increase when last retry fail? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277494,277494#msg-277494 From r at roze.lv Fri Nov 24 10:26:10 2017 From: r at roze.lv (Reinis Rozitis) Date: Fri, 24 Nov 2017 12:26:10 +0200 Subject: Nginx + SSL problem for old browsers In-Reply-To: References: Message-ID: <000001d3650e$a5cd9ea0$f168dbe0$@roze.lv> > Could this be because of the lack of SNI support on old browsers? It's possible. But you should be able to tell on the client itself - if there is ssl cert domain mismatch or the client can't connect at all. Imo the easiest way to check https://www.ssllabs.com/ssltest/ p.s. newer Openssl (1.1.x) versions by default disable weak ciphers. So depending on how your nginx/openssl is compiled the actual cipher list might be different to the configuration line. 
rr From mdounin at mdounin.ru Fri Nov 24 13:14:39 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 24 Nov 2017 16:14:39 +0300 Subject: Cannot build 1.12.x on MSYS2/MinGW64 In-Reply-To: <8d2c4d68edf4bd3203382d5ecd0dbefb.NginxMailingListEnglish@forum.nginx.org> References: <8d2c4d68edf4bd3203382d5ecd0dbefb.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20171124131439.GQ78325@mdounin.ru> Hello! On Fri, Nov 24, 2017 at 03:04:11AM -0500, joseph-pg wrote: > The build fails without this patch. Is there a chance for this to be merged > to the 1.12 branch? > https://hg.nginx.org/nginx/rev/4a343228c55e No. Try 1.13.x instead. -- Maxim Dounin http://mdounin.ru/ From tkadm30 at yandex.com Sat Nov 25 11:38:27 2017 From: tkadm30 at yandex.com (Etienne Robillard) Date: Sat, 25 Nov 2017 06:38:27 -0500 Subject: real time notifications django app In-Reply-To: <16984f12-245f-2bf8-b5be-7edb65950805@yandex.com> References: <16984f12-245f-2bf8-b5be-7edb65950805@yandex.com> Message-ID: <7988434c-9080-f4e2-d71d-d5758a3ab097@yandex.com> Hi, Could it be possible to use Django and asyncio to receive asynchronous redis messages and store them into a ZODB database? (ClientStorage) Is it possible to implement PUSH notifications with uWSGI backend and redis server ? Thank you, Etienne Le 2017-11-25 ? 05:02, Etienne Robillard a ?crit?: > Hi, > > I would like to implement a simple web client for sending asynchronous > messages from RabbitMQ or Redis to a ClientStorage server with > asyncio. This code should ideally run under Python 3.5.3 and WSGI. > > Design ideas: > > - ZODBController class: this will need refactoring > > - Support PostgreSQL schemas in the future > > - Make the code compatible with WSGI environments > > > What do you think? 
> > > Regards,
> > Etienne
>
--
Etienne Robillard
tkadm30 at yandex.com
http://www.isotopesoftware.ca/

From dfrancislyon at yahoo.com Sat Nov 25 22:43:29 2017
From: dfrancislyon at yahoo.com (Daniel Francis-Lyon)
Date: Sat, 25 Nov 2017 14:43:29 -0800
Subject: Please take me off the mailing list
Message-ID: <6691ecb7-cf96-45dc-9c4a-f38cca2efa70@email.android.com>

An HTML attachment was scrubbed...
URL:

From fsantiago at garbage-juice.com Sat Nov 25 23:00:46 2017
From: fsantiago at garbage-juice.com (Fabian A. Santiago)
Date: Sat, 25 Nov 2017 18:00:46 -0500
Subject: Please take me off the mailing list
In-Reply-To: <6691ecb7-cf96-45dc-9c4a-f38cca2efa70@email.android.com>
References: <6691ecb7-cf96-45dc-9c4a-f38cca2efa70@email.android.com>
Message-ID:

On November 25, 2017 5:43:29 PM EST, Daniel Francis-Lyon via nginx wrote:
>_______________________________________________
>nginx mailing list
>nginx at nginx.org
>http://mailman.nginx.org/mailman/listinfo/nginx

See the link at the bottom? Follow it and you're able to unsubscribe yourself.

--
Thanks,
Fabian S.

OpenPGP: 3C3FA072ACCB7AC5DB0F723455502B0EEB9070FC

From nginx-forum at forum.nginx.org Sun Nov 26 00:44:54 2017
From: nginx-forum at forum.nginx.org (teknopaul)
Date: Sat, 25 Nov 2017 19:44:54 -0500
Subject: ngx_msec_t is 32bit on ARM
Message-ID: <658c4218cc17fbb00064c654c3710032.NginxMailingListEnglish@forum.nginx.org>

I'm trying to compile nginx for a Raspberry Pi.

src/core/ngx_times.c:

    time_t sec;
    ngx_uint_t msec;
    struct timeval tv;

    ngx_gettimeofday(&tv);

    sec = tv.tv_sec;
    msec = tv.tv_usec / 1000;

    ngx_current_msec = (ngx_msec_t) sec * 1000 + msec;

ngx_current_msec is defined as an ngx_msec_t, which in turn is ngx_uint_t. On an rpi this is not big enough to hold the Unix epoch in millis (sec * 1000).

The nginx code does compile, but my tests fail: they have hardcoded values for the epoch.

Is this deliberate? I guess it's cropping the high order bits?
So millis comparisons might work, but timestamps generated from this value might not?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277514,277514#msg-277514

From sca at andreasschulze.de Sun Nov 26 13:17:37 2017
From: sca at andreasschulze.de (A. Schulze)
Date: Sun, 26 Nov 2017 14:17:37 +0100
Subject: cts-submit
Message-ID:

Hello,

experiments with nginx-ct ¹) show that I need a tool to submit a certificate to some public logs. cts-submit ²) seems useful, but it requires me to install PHP on every host :-/

I know there are also Python implementations, but is anybody aware of an implementation in *plain POSIX shell + openssl*?

Andreas

¹) https://github.com/grahamedgecombe/nginx-ct
²) https://github.com/jbvignaud/cts-submit

From nginx-forum at forum.nginx.org Mon Nov 27 11:26:00 2017
From: nginx-forum at forum.nginx.org (haloween)
Date: Mon, 27 Nov 2017 06:26:00 -0500
Subject: constant X-Cache-Status:MISS on woff files
Message-ID: <2133ab16654986b10f447815ecf0a6a5.NginxMailingListEnglish@forum.nginx.org>

I have a problem with caching. The following location in my config perfectly handles all the extensions (jpeg, jpg and so on).
location ~* \.(?:ico|pdf|flv|jpg|jpeg|png|gif|swf|x-html|woff|woff2|ttf|eot|map)$ { gzip off; expires 30d; log_not_found off; access_log off; add_header Cache-Control "public"; add_header Access-Control-Allow-Origin *; add_header X-Cache-Status $upstream_cache_status; proxy_cache img_cache_main; proxy_buffers 2048 64k; proxy_buffer_size 128k; proxy_set_header Host "HOSTNAME"; proxy_ignore_headers Cache-Control Vary Expires Set-Cookie X-Accel-Expires; proxy_cache_valid 404 1m; aio threads=default; aio_write on; output_buffers 16 1024k; sendfile on; proxy_pass http://HOSTNAME_appserver; } Unfortunately i get MISS on all woff files :/ Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277522,277522#msg-277522 From mdounin at mdounin.ru Mon Nov 27 13:09:05 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 27 Nov 2017 16:09:05 +0300 Subject: ngx_msec_t is 32bit on ARM In-Reply-To: <658c4218cc17fbb00064c654c3710032.NginxMailingListEnglish@forum.nginx.org> References: <658c4218cc17fbb00064c654c3710032.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20171127130905.GT78325@mdounin.ru> Hello! On Sat, Nov 25, 2017 at 07:44:54PM -0500, teknopaul wrote: > I'm trying to compile nginx on for a raspberry pi > > src/core/ngx_times.c > > time_t sec; > ngx_uint_t msec; > struct timeval tv; > > ngx_gettimeofday(&tv); > sec = tv.tv_sec; > msec = tv.tv_usec / 1000; > ngx_current_msec = (ngx_msec_t) sec * 1000 + msec; > > ngx_current_msec is defined as a ngx_msec_t which in turn is ngx_uint_t. In > an rpi is not big enough to hold Unix epoc in millis. (sec * 1000) > > nginx code does compile, but my tests fail: they have hardcoded values for > the epoc. > > Is this deliberate? I guess its cropping the high order bits? So millis > comparisons might work but timestamps generated from this value might not? Yes, this is intentional. 
The ngx_current_msec variable (and the ngx_msec_t type) is to be used to effectively implement timers, and hence it uses platform-specific fast integer. As such, it can easily overflow on 32-bit platforms. You have to fix your tests. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Mon Nov 27 13:49:36 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 27 Nov 2017 16:49:36 +0300 Subject: ngx_http_upstream_test_next u->peer.tries > 1 In-Reply-To: <30c507f7e240bb23c0ef9d6959aad146.NginxMailingListEnglish@forum.nginx.org> References: <30c507f7e240bb23c0ef9d6959aad146.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20171127134936.GV78325@mdounin.ru> Hello! On Fri, Nov 24, 2017 at 03:23:30AM -0500, crasyangel wrote: > assume all servers always fail in upstream > > nginx would call ngx_http_upstream_next when u->peer.tries > 1, and call > ngx_http_upstream_finalize_request directly when u->peer.tries == 1 > > it would not pass NGX_PEER_FAILED to u->peer.free > > so how peer->fails increase when last retry fail? The result of the last attempt is returned to the client as is, and it is not considered to be a failure regardless of other settings. In the edge case when there is only one server, requests are simply passed to the server. If there is more than one server, and all servers fail, the first request will disable all servers but the last one (assuming default max_fails=1), and the second request will disable the last server. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Mon Nov 27 15:33:02 2017 From: nginx-forum at forum.nginx.org (abyvatel) Date: Mon, 27 Nov 2017 10:33:02 -0500 Subject: 400 bad requests now returning http headers? 
( crossdomain.xml ) In-Reply-To: References: <20140611202129.GF1849@mdounin.ru> Message-ID: <6d84c09acb368025281a471714b2f2cd.NginxMailingListEnglish@forum.nginx.org> Hello, this is possible too late but may useful I have'nt found any solution and just recompiled the Nginx with a little patch i've changed file ./src/http/ngx_http_request.c in function ngx_http_create_request: r->http_version = NGX_HTTP_VERSION_10 to r->http_version = NGX_HTTP_VERSION_9 this may ugly but now working as expected Posted at Nginx Forum: https://forum.nginx.org/read.php?2,250772,277525#msg-277525 From nginx at 16bits.net Mon Nov 27 21:21:28 2017 From: nginx at 16bits.net (=?ISO-8859-1?Q?=C1ngel?=) Date: Mon, 27 Nov 2017 22:21:28 +0100 Subject: cts-submit In-Reply-To: References: Message-ID: <1511817688.944.16.camel@16bits.net> On 2017-11-26 at 14:17 +0100, A. Schulze wrote: > Hello, > > experiments with nginx-ct ?) show that I need a tool to submit a certificate to some public logs. > cts-submit ?) seems useful. But it require me to install php on every host :-/ > > I know there are also python implementations. but > is anybody aware of an implementation in *plain posix shell + openssl* ? > > Andreas Doesn't your CA already submit them to the Certificate Transparency logs? From r1ch+nginx at teamliquid.net Mon Nov 27 22:21:17 2017 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Mon, 27 Nov 2017 23:21:17 +0100 Subject: cts-submit In-Reply-To: <1511817688.944.16.camel@16bits.net> References: <1511817688.944.16.camel@16bits.net> Message-ID: You can use ct-submit, once built the binary can be copied and run on any system without any dependencies. https://github.com/grahamedgecombe/ct-submit On Mon, Nov 27, 2017 at 10:21 PM, ?ngel wrote: > On 2017-11-26 at 14:17 +0100, A. Schulze wrote: > > Hello, > > > > experiments with nginx-ct ?) show that I need a tool to submit a > certificate to some public logs. > > cts-submit ?) seems useful. 
But it require me to install php on every > host :-/ > > > > I know there are also python implementations. but > > is anybody aware of an implementation in *plain posix shell + openssl* ? > > > > Andreas > > Doesn't your CA already submit them to the Certificate Transparency > logs? > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Nov 28 10:00:08 2017 From: nginx-forum at forum.nginx.org (justink101) Date: Tue, 28 Nov 2017 05:00:08 -0500 Subject: upstream zone size Message-ID: What is a reasonable value for upstream zone size? I'm just shooting in the dark with 64k right now. Running 64bit Linux. The official NGINX documentation does not elaborate on it, and I can't find anything useful on Google. upstream backends { zone example_zone 64k; keepalive 8l; server 10.20.30.2 max_fails=3 fail_timeout=30s; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277533,277533#msg-277533 From mig at 1984.cz Tue Nov 28 11:32:32 2017 From: mig at 1984.cz (mig at 1984.cz) Date: Tue, 28 Nov 2017 12:32:32 +0100 Subject: Nginx cache returns MISS after a few hours, can't be set up to cache "forever" Message-ID: <20171128113232.jmmf7lkk4i3isibj@me.localdomain> Hi, I am trying to cache files "forever". Unfortunately in about 2-6 hours the cache starts to return MISS again. This is the setting: --- proxy_cache_path /var/cache/nginx-cache levels=1:2 keys_zone=mycache:10m max_size=20g inactive=10y; proxy_cache_valid 10y; "Expires" header returned by the upstream is set to the year 2027 and "Cache-Control" to max-age=315360000 (i.e. 10 years). --- I suppose, if was the expiry time the reason, it would have return EXPIRED, but not MISS. The cache fills up to ~5 GB (from allowed 20 GB), so the space should not be the problem. 
I have tried to remove all cached files and restart nginx, but it did not help. For testing I use plain curl GET requests (without ETag, Vary, etc. headers) - always the same. Thank you for any hint, Jan Molic From arut at nginx.com Tue Nov 28 12:18:31 2017 From: arut at nginx.com (Roman Arutyunyan) Date: Tue, 28 Nov 2017 15:18:31 +0300 Subject: Nginx cache returns MISS after a few hours, can't be set up to cache "forever" In-Reply-To: <20171128113232.jmmf7lkk4i3isibj@me.localdomain> References: <20171128113232.jmmf7lkk4i3isibj@me.localdomain> Message-ID: <20171128121831.GB546@Romans-MacBook-Air.local> Hi, On Tue, Nov 28, 2017 at 12:32:32PM +0100, mig at 1984.cz wrote: > Hi, > > I am trying to cache files "forever". Unfortunately in about 2-6 hours the cache starts to return MISS again. This is the setting: > > --- > > proxy_cache_path /var/cache/nginx-cache levels=1:2 keys_zone=mycache:10m max_size=20g inactive=10y; > > proxy_cache_valid 10y; > > "Expires" header returned by the upstream is set to the year 2027 and "Cache-Control" to max-age=315360000 (i.e. 10 years). > > --- > > I suppose, if was the expiry time the reason, it would have return EXPIRED, but not MISS. > > The cache fills up to ~5 GB (from allowed 20 GB), so the space should not be the problem. > > I have tried to remove all cached files and restart nginx, but it did not help. > > For testing I use plain curl GET requests (without ETag, Vary, etc. headers) - always the same. It's not only disk size that matters. Cache entries may also be evicted when approaching the keys_zone size. Try increasing the zone size. 
-- Roman Arutyunyan From peter_booth at me.com Tue Nov 28 12:42:37 2017 From: peter_booth at me.com (Peter Booth) Date: Tue, 28 Nov 2017 07:42:37 -0500 Subject: Nginx cache returns MISS after a few hours, can't be set up to cache "forever" In-Reply-To: <20171128113232.jmmf7lkk4i3isibj@me.localdomain> References: <20171128113232.jmmf7lkk4i3isibj@me.localdomain> Message-ID: <3959A47C-9320-443C-9F1E-66D8D235C644@me.com> Can you count the number of files that are in your cache and whether or not it's changing with time? Then compare with the number of unique cache keys (from your web server log) When the server starts returning a MISS - does it only do this for newer objects that haven't been requested before? Does it happen for any objects that had previously been returned as a HIT? > On Nov 28, 2017, at 6:32 AM, mig at 1984.cz wrote: > > Hi, > > I am trying to cache files "forever". Unfortunately in about 2-6 hours the cache starts to return MISS again. This is the setting: > > --- > > proxy_cache_path /var/cache/nginx-cache levels=1:2 keys_zone=mycache:10m max_size=20g inactive=10y; > > proxy_cache_valid 10y; > > "Expires" header returned by the upstream is set to the year 2027 and "Cache-Control" to max-age=315360000 (i.e. 10 years). > > --- > > I suppose, if was the expiry time the reason, it would have return EXPIRED, but not MISS. > > The cache fills up to ~5 GB (from allowed 20 GB), so the space should not be the problem. > > I have tried to remove all cached files and restart nginx, but it did not help. > > For testing I use plain curl GET requests (without ETag, Vary, etc. headers) - always the same. 
> > Thank you for any hint, > Jan Molic > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From iamcarlosmaddaleno at gmail.com Tue Nov 28 15:44:43 2017 From: iamcarlosmaddaleno at gmail.com (carlos maddaleno cuellar) Date: Tue, 28 Nov 2017 09:44:43 -0600 Subject: Fwd: Problem with CAS on nginx configuration In-Reply-To: References: Message-ID: Hello! I wanted to know if some one could help me with a problem i have with my CAS, the problem is that i have a nginx that is responding to three diferent servers as a proxy, the thing is that i put the cas on one instance (server) but when it loads on siampapps(nginx) https://siamppapps.mp/cas it shows that is not navigating on a secure port as you can see [image: Im?genes integradas 2] but when i try directly on the ip of the server and the port it doesn't show any error [image: Im?genes integradas 1] this is my nginx configuration: ------------------------------------------------------------ --------------------------------------------------------- upstream nomina { server siampv4.mp:28080; } upstream siampv3.mp { server siampv3.mp:28083; } upstream siampv5.mp { server siampv5.mp:28080; } server { listen 443; client_max_body_size 8M; ssl on; ssl_certificate /etc/nginx/siampapps.mp.crt; # path to your cacert.pem ssl_certificate_key /etc/nginx/siampapps.mp.key; # path to your privkey.pem server_name test.mp; # ...... fastcgi_param HTTPS on; fastcgi_param HTTP_SCHEME https; #location / { # root /usr/share/nginx/html; # index index.html index.htm; # } location /nomina { proxy_pass http://nomina; } location / { proxy_pass http://siampv3.mp; } location /mailer { proxy_pass http://siampv5.mp; } location /cas { proxy_pass http://siampv5.mp; } } thanks a lot!! -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image.png Type: image/png Size: 72108 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 67285 bytes Desc: not available URL: From nginx-forum at forum.nginx.org Tue Nov 28 16:27:57 2017 From: nginx-forum at forum.nginx.org (pstnta) Date: Tue, 28 Nov 2017 11:27:57 -0500 Subject: domain only reachable with https:// in front Message-ID: Hi, I'm using nginx as reverse proxy for guacamole, I can only reach my domain with https://pstn.host or https://www.pstn.host, it won't work without https or with even with https. here's my sites-enabled/pstn.host https://pastebin.com/raw/dKiEi72q any ideas what's wrong or missing? thanks! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277546,277546#msg-277546 From alexander.naumann at artcom-venture.de Tue Nov 28 16:34:40 2017 From: alexander.naumann at artcom-venture.de (Alexander Naumann) Date: Tue, 28 Nov 2017 17:34:40 +0100 (CET) Subject: domain only reachable with https:// in front In-Reply-To: References: Message-ID: <1806679732.4879081.1511886880173.JavaMail.zimbra@artcom-venture.de> Hi, you have : if ($scheme != "https") { return 301 https://$host$request_uri; } # managed by Certbot in your config, that redirects everything to https. Mit freundlichen Gr??en / best regards Alexander Naumann artcom venture GmbH ----- Urspr?ngliche Mail ----- Von: "pstnta" An: nginx at nginx.org Gesendet: Dienstag, 28. November 2017 17:27:57 Betreff: domain only reachable with https:// in front Hi, I'm using nginx as reverse proxy for guacamole, I can only reach my domain with https://pstn.host or https://www.pstn.host, it won't work without https or with even with https. here's my sites-enabled/pstn.host https://pastebin.com/raw/dKiEi72q any ideas what's wrong or missing? thanks! 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277546,277546#msg-277546 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Nov 28 16:40:42 2017 From: nginx-forum at forum.nginx.org (pstnta) Date: Tue, 28 Nov 2017 11:40:42 -0500 Subject: domain only reachable with https:// in front In-Reply-To: <1806679732.4879081.1511886880173.JavaMail.zimbra@artcom-venture.de> References: <1806679732.4879081.1511886880173.JavaMail.zimbra@artcom-venture.de> Message-ID: <1523f7ed82741ffeef88c04fa72bc745.NginxMailingListEnglish@forum.nginx.org> hi, thanks for answering, shouldn't that forward everything to https? so shouldn't it work with just pstn.host? instead of https://pstn.host Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277546,277548#msg-277548 From jeff.dyke at gmail.com Tue Nov 28 17:17:07 2017 From: jeff.dyke at gmail.com (Jeff Dyke) Date: Tue, 28 Nov 2017 12:17:07 -0500 Subject: domain only reachable with https:// in front In-Reply-To: <1523f7ed82741ffeef88c04fa72bc745.NginxMailingListEnglish@forum.nginx.org> References: <1806679732.4879081.1511886880173.JavaMail.zimbra@artcom-venture.de> <1523f7ed82741ffeef88c04fa72bc745.NginxMailingListEnglish@forum.nginx.org> Message-ID: I think it is unfortunate that certbot does it this way, with an if statement, which i believe is evaluated in every request. 
I use something like the following (with your names): server { listen 80 default_server; listen [::]:80 default_server; server_name pstn.host www.pstn.host; return 301 https://$host$request_uri; } server { listen 443 ssl default_server; ssl_certificate /etc/letsencrypt/live/pstn.host/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/pstn.host/privkey.pem; ....reset of config } Not part of your question, but I also use the hooks in webroot mode, rather than nginx, for certbot, so it's never modifies my configuration, as the sites-enabled files are managed by a configuration management system across about 100 domains, some with special requirements. HTH, Jeff On Tue, Nov 28, 2017 at 11:40 AM, pstnta wrote: > hi, > > thanks for answering, > > shouldn't that forward everything to https? so shouldn't it work with just > pstn.host? instead of https://pstn.host > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,277546,277548#msg-277548 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sca at andreasschulze.de Tue Nov 28 17:33:18 2017 From: sca at andreasschulze.de (A. Schulze) Date: Tue, 28 Nov 2017 18:33:18 +0100 Subject: cts-submit In-Reply-To: <1511817688.944.16.camel@16bits.net> References: <1511817688.944.16.camel@16bits.net> Message-ID: Am 27.11.2017 um 22:21 schrieb ?ngel: > On 2017-11-26 at 14:17 +0100, A. Schulze wrote: >> Hello, >> >> experiments with nginx-ct ?) show that I need a tool to submit a certificate to some public logs. >> cts-submit ?) seems useful. But it require me to install php on every host :-/ >> >> I know there are also python implementations. but >> is anybody aware of an implementation in *plain posix shell + openssl* ? >> >> Andreas > > Doesn't your CA already submit them to the Certificate Transparency > logs? I think LE in my case does. 
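For reference, the log API that ct-submit and cts-submit wrap is a single JSON POST (the RFC 6962 "add-chain" call), so most of a shell implementation is just base64-encoding the chain. A rough sketch, assuming openssl and curl are available; the log URL is a placeholder, not a real CT log:

```shell
#!/bin/sh
# Sketch of an RFC 6962 "add-chain" request body in plain shell + openssl.
# LOG_URL is hypothetical; a real deployment would loop over real log URLs.
set -eu
LOG_URL="https://ct-log.example.net"

# Base64-encode the DER form of a PEM certificate on a single line.
b64der() { openssl x509 -in "$1" -outform der | openssl base64 -A; }

# Demo input: a throwaway self-signed certificate.
openssl req -x509 -newkey rsa:2048 -nodes -subj /CN=demo -days 1 \
        -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem 2>/dev/null

# JSON body: leaf certificate first, then any intermediates.
body="{\"chain\":[\"$(b64der /tmp/demo-cert.pem)\"]}"
printf '%s\n' "$body"

# The actual submission; the JSON response carries the SCT fields
# (sct_version, id, timestamp, extensions, signature):
# curl -s -H 'Content-Type: application/json' \
#      -d "$body" "$LOG_URL/ct/v1/add-chain"
```

Parsing the returned SCT into the TLS-extension binary format that nginx-ct expects is the part where a plain-shell version gets painful, which is presumably why the existing tools are written in PHP and Go.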
But at the end of the day I need a simple program to fetch Signed Certificate Timestamp data from one/multiple logs. Installing php or go (even only for compiling) is inconvenient for me. Are there other ways to only /fetch/ signed certificate timestamp data? Andreas From r1ch+nginx at teamliquid.net Tue Nov 28 19:30:57 2017 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Tue, 28 Nov 2017 20:30:57 +0100 Subject: domain only reachable with https:// in front In-Reply-To: References: <1806679732.4879081.1511886880173.JavaMail.zimbra@artcom-venture.de> <1523f7ed82741ffeef88c04fa72bc745.NginxMailingListEnglish@forum.nginx.org> Message-ID: Your ISP is blocking port 80, so you cannot get redirected to HTTPS. http://www.dslreports.com/faq/11852 On Tue, Nov 28, 2017 at 6:17 PM, Jeff Dyke wrote: > I think it is unfortunate that certbot does it this way, with an if > statement, which i believe is evaluated in every request. I use something > like the following (with your names): > > server { > listen 80 default_server; > listen [::]:80 default_server; > server_name pstn.host www.pstn.host; > return 301 https://$host$request_uri; > } > > > server { > listen 443 ssl default_server; > ssl_certificate /etc/letsencrypt/live/pstn.host/fullchain.pem; > ssl_certificate_key /etc/letsencrypt/live/pstn.host/privkey.pem; > > ....reset of config > } > > Not part of your question, but I also use the hooks in webroot mode, > rather than nginx, for certbot, so it's never modifies my configuration, as > the sites-enabled files are managed by a configuration management system > across about 100 domains, some with special requirements. > > HTH, > Jeff > > On Tue, Nov 28, 2017 at 11:40 AM, pstnta > wrote: > >> hi, >> >> thanks for answering, >> >> shouldn't that forward everything to https? so shouldn't it work with just >> pstn.host? 
instead of https://pstn.host >> >> Posted at Nginx Forum: https://forum.nginx.org/read.p >> hp?2,277546,277548#msg-277548 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jason.Whittington at equifax.com Tue Nov 28 22:55:53 2017 From: Jason.Whittington at equifax.com (Jason Whittington) Date: Tue, 28 Nov 2017 22:55:53 +0000 Subject: [IE] Fwd: Problem with CAS on nginx configuration In-Reply-To: References: Message-ID: <995C5C9AD54A3C419AF1C20A8B6AB9A432A9F71C@STLEISEXCMBX3.eis.equifax.com> It looks to me like the problem is that siampapps.mp.crt is a weak certificate. F12 tools in google show that it uses SHA-1 which is obsolete. You should be able to fix this by generating a new cert using a more secure algorithm. [cid:image005.jpg at 01D36869.C05AFE80] From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of carlos maddaleno cuellar Sent: Tuesday, November 28, 2017 9:45 AM To: nginx at nginx.org Subject: [IE] Fwd: Problem with CAS on nginx configuration Hello! 
I wanted to know if some one could help me with a problem i have with my CAS, the problem is that i have a nginx that is responding to three diferent servers as a proxy, the thing is that i put the cas on one instance (server) but when it loads on siampapps(nginx) https://siamppapps.mp/cas it shows that is not navigating on a secure port as you can see [Im?genes integradas 2] but when i try directly on the ip of the server and the port it doesn't show any error [Im?genes integradas 1] this is my nginx configuration: --------------------------------------------------------------------------------------------------------------------- upstream nomina { server siampv4.mp:28080; } upstream siampv3.mp { server siampv3.mp:28083; } upstream siampv5.mp { server siampv5.mp:28080; } server { listen 443; client_max_body_size 8M; ssl on; ssl_certificate /etc/nginx/siampapps.mp.crt; # path to your cacert.pem ssl_certificate_key /etc/nginx/siampapps.mp.key; # path to your privkey.pem server_name test.mp; # ...... fastcgi_param HTTPS on; fastcgi_param HTTP_SCHEME https; #location / { # root /usr/share/nginx/html; # index index.html index.htm; # } location /nomina { proxy_pass http://nomina; } location / { proxy_pass http://siampv3.mp; } location /mailer { proxy_pass http://siampv5.mp; } location /cas { proxy_pass http://siampv5.mp; } } thanks a lot!! This message contains proprietary information from Equifax which may be confidential. If you are not an intended recipient, please refrain from any disclosure, copying, distribution or use of this information and note that such actions are prohibited. If you have received this transmission in error, please notify by e-mail postmaster at equifax.com. Equifax? is a registered trademark of Equifax Inc. All rights reserved. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image005.jpg Type: image/jpeg Size: 24029 bytes Desc: image005.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image006.png Type: image/png Size: 50730 bytes Desc: image006.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image007.png Type: image/png Size: 42075 bytes Desc: image007.png URL: From nginx-forum at forum.nginx.org Tue Nov 28 23:00:45 2017 From: nginx-forum at forum.nginx.org (pstnta) Date: Tue, 28 Nov 2017 18:00:45 -0500 Subject: domain only reachable with https:// in front In-Reply-To: References: Message-ID: ahhh that's right, thanks for all your help guys ! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277546,277561#msg-277561 From michael.ottoson at cri.com Wed Nov 29 04:27:37 2017 From: michael.ottoson at cri.com (Michael Ottoson) Date: Wed, 29 Nov 2017 04:27:37 +0000 Subject: Moving SSL termination to the edge increased the instance of 502 errors Message-ID: Hi All, We installed nginx as load balancer/failover in front of two upstream web servers. At first SSL terminated at the web servers and nginx was configured as TCP passthrough on 443. We rarely experiences 502s and when it did it was likely due to tuning/tweaking. About a week ago we moved SSL termination to the edge. Since then we've been getting daily 502s. A small percentage - never reaching 1%. But with ? million requests per day, we are starting to get complaints. Stranger: the percentage seems to be rising. I have more details and a pretty picture here: https://serverfault.com/questions/885638/moving-ssl-termination-to-the-edge-increased-the-instance-of-502-errors Any advice how to squash those 502s? Should I be worried nginx is leaking? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Wed Nov 29 10:11:39 2017 From: nginx-forum at forum.nginx.org (GazCBG) Date: Wed, 29 Nov 2017 05:11:39 -0500 Subject: Installing module in Nginx with souce, after installing from Repositiory Message-ID: <9747254bc2cad4d22ade492a8d0d6acb.NginxMailingListEnglish@forum.nginx.org> I have installed the mainline version of Nginx using the Lauchpad PPA Respositiy; Via: add-apt-repository ppa:nginx/development I wouldlike to install an additonal module into it, however I need to install it from source. If I was to download the correct version from the Nginx site to my server and install the module via source, will this work ok, or will it mess up what had already been installed? Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277563,277563#msg-277563 From mdounin at mdounin.ru Wed Nov 29 12:42:56 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 29 Nov 2017 15:42:56 +0300 Subject: Moving SSL termination to the edge increased the instance of 502 errors In-Reply-To: References: Message-ID: <20171129124256.GI78325@mdounin.ru> Hello! On Wed, Nov 29, 2017 at 04:27:37AM +0000, Michael Ottoson wrote: > Hi All, > > We installed nginx as load balancer/failover in front of two upstream web servers. > > At first SSL terminated at the web servers and nginx was configured as TCP passthrough on 443. > > We rarely experiences 502s and when it did it was likely due to tuning/tweaking. > > About a week ago we moved SSL termination to the edge. Since then we've been getting daily 502s. A small percentage - never reaching 1%. But with ? million requests per day, we are starting to get complaints. > > Stranger: the percentage seems to be rising. > > I have more details and a pretty picture here: > > https://serverfault.com/questions/885638/moving-ssl-termination-to-the-edge-increased-the-instance-of-502-errors > > > Any advice how to squash those 502s? Should I be worried nginx is leaking? 
First of all, you have to find the reason for these 502 errors. Looking into the error log is a good start. As per provided serverfault question, you see "no live upstreams" errors in logs. These errors mean that all configured upstream servers were disabled due to previous errors (see http://nginx.org/en/docs/http/ngx_http_upstream_module.html#max_fails), that is, these errors are just a result of previous errors. You have to find out real errors, they should be in the error log too. -- Maxim Dounin http://mdounin.ru/ From michael.ottoson at cri.com Wed Nov 29 13:05:17 2017 From: michael.ottoson at cri.com (Michael Ottoson) Date: Wed, 29 Nov 2017 13:05:17 +0000 Subject: Moving SSL termination to the edge increased the instance of 502 errors In-Reply-To: <20171129124256.GI78325@mdounin.ru> References: <20171129124256.GI78325@mdounin.ru> Message-ID: Thanks, Maxim. That makes a lot of sense. However, the problem started at exactly the same time we moved SSL termination. There were no changes to the application. It is unlikely to be a mere coincidence - but it could be. We were previously using HAPROXY for load balancing (well, the company we inherited this from did) and the same happened when they tried moving SSL termination. There is a reply to my question on serverfault, suggesting increasing keepalives (https://www.nginx.com/blog/load-balancing-with-nginx-plus-part2/#keepalive). This is because moving SSL increases the number of TCP connects. I'll give that a try and report back. -----Original Message----- From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Maxim Dounin Sent: Wednesday, November 29, 2017 7:43 AM To: nginx at nginx.org Subject: Re: Moving SSL termination to the edge increased the instance of 502 errors Hello! On Wed, Nov 29, 2017 at 04:27:37AM +0000, Michael Ottoson wrote: > Hi All, > > We installed nginx as load balancer/failover in front of two upstream web servers. 
> > At first SSL terminated at the web servers and nginx was configured as TCP passthrough on 443. > > We rarely experiences 502s and when it did it was likely due to tuning/tweaking. > > About a week ago we moved SSL termination to the edge. Since then we've been getting daily 502s. A small percentage - never reaching 1%. But with ? million requests per day, we are starting to get complaints. > > Stranger: the percentage seems to be rising. > > I have more details and a pretty picture here: > > https://serverfault.com/questions/885638/moving-ssl-termination-to-the > -edge-increased-the-instance-of-502-errors > > > Any advice how to squash those 502s? Should I be worried nginx is leaking? First of all, you have to find the reason for these 502 errors. Looking into the error log is a good start. As per provided serverfault question, you see "no live upstreams" errors in logs. These errors mean that all configured upstream servers were disabled due to previous errors (see http://nginx.org/en/docs/http/ngx_http_upstream_module.html#max_fails), that is, these errors are just a result of previous errors. You have to find out real errors, they should be in the error log too. -- Maxim Dounin http://mdounin.ru/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From drew at drewnturner.com Wed Nov 29 14:17:01 2017 From: drew at drewnturner.com (Drew Turner) Date: Wed, 29 Nov 2017 08:17:01 -0600 Subject: Moving SSL termination to the edge increased the instance of 502 errors In-Reply-To: References: <20171129124256.GI78325@mdounin.ru> Message-ID: What's the backend for this IIS, NGINX, Apache, etc? Is it requiring SNI? Do you have multiple hostnames? On Wed, Nov 29, 2017 at 7:05 AM, Michael Ottoson wrote: > Thanks, Maxim. > > That makes a lot of sense. However, the problem started at exactly the > same time we moved SSL termination. There were no changes to the > application. 
It is unlikely to be a mere coincidence - but it could be. > > We were previously using HAPROXY for load balancing (well, the company we > inherited this from did) and the same happened when they tried moving SSL > termination. > > There is a reply to my question on serverfault, suggesting increasing > keepalives (https://www.nginx.com/blog/load-balancing-with-nginx- > plus-part2/#keepalive). This is because moving SSL increases the number > of TCP connects. I'll give that a try and report back. > > -----Original Message----- > From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Maxim Dounin > Sent: Wednesday, November 29, 2017 7:43 AM > To: nginx at nginx.org > Subject: Re: Moving SSL termination to the edge increased the instance of > 502 errors > > Hello! > > On Wed, Nov 29, 2017 at 04:27:37AM +0000, Michael Ottoson wrote: > > > Hi All, > > > > We installed nginx as load balancer/failover in front of two upstream > web servers. > > > > At first SSL terminated at the web servers and nginx was configured as > TCP passthrough on 443. > > > > We rarely experiences 502s and when it did it was likely due to > tuning/tweaking. > > > > About a week ago we moved SSL termination to the edge. Since then we've > been getting daily 502s. A small percentage - never reaching 1%. But with > ? million requests per day, we are starting to get complaints. > > > > Stranger: the percentage seems to be rising. > > > > I have more details and a pretty picture here: > > > > https://serverfault.com/questions/885638/moving-ssl-termination-to-the > > -edge-increased-the-instance-of-502-errors > > > > > > Any advice how to squash those 502s? Should I be worried nginx is > leaking? > > First of all, you have to find the reason for these 502 errors. > Looking into the error log is a good start. > > As per provided serverfault question, you see "no live upstreams" > errors in logs. 
These errors mean that all configured upstream servers > were disabled due to previous errors (see http://nginx.org/en/docs/http/ > ngx_http_upstream_module.html#max_fails), > that is, these errors are just a result of previous errors. You have to > find out real errors, they should be in the error log too. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Nov 29 16:49:06 2017 From: nginx-forum at forum.nginx.org (AntoUX) Date: Wed, 29 Nov 2017 11:49:06 -0500 Subject: Hide a request cookie in proxy_pass In-Reply-To: References: <20140829172725.GQ1849@mdounin.ru> Message-ID: <750de23062f94d07d3f77905de8d8549.NginxMailingListEnglish@forum.nginx.org> Hello, I've found strange behaviour with this rewrite method. When : - there are space (%20) in the URI And - a cookie match regexp (and is removed) Nginx replace ";" and " " in Cookie header with %3B%20 For example: I want to remove "Testy" cookie. 
Here is nginx sample config : server { set $new_cookie $http_cookie; if ($http_cookie ~ "(.*)(?:^|;)\s*Testy=[^;]+(.*)") { set $new_cookie $1$2; } if ($new_cookie ~ "^[;]+(.*)") { set $new_cookie $1; } proxy_set_header Cookie $new_cookie; proxy_pass http://www.com_backend; } upstream www.com_backend { server localhost:9020; keepalive 30; } With this request : GET /api/TEST%20TEST HTTP/1.1 Cookie: country_code=FR; session=IntcI; lastvisitfor=IjIwMT%3D; Testy=uid%08474524469%26fst%3D15118; teaser=eyJ0eXBl2; popin=eyJib3R0 Nginx remove correctly Testy cookie but forward this cookie header to backend: Cookie: country_code=FR%3B%20session=IntcI%3B%20lastvisitfor=IjIwMT%3D%3B%20teaser=eyJ0eXBl2%3B%20popin=eyJib3R0 Due to the fact there are no "; " anymore, backend consider there is only one big cookie : "country_code". nginx version: nginx/1.12.1 OS : CentOS 6.9 Any ideas on how to fix it ? Thanks. Anto Posted at Nginx Forum: https://forum.nginx.org/read.php?2,252944,277574#msg-277574 From peter_booth at me.com Wed Nov 29 18:12:23 2017 From: peter_booth at me.com (Peter Booth) Date: Wed, 29 Nov 2017 13:12:23 -0500 Subject: Moving SSL termination to the edge increased the instance of 502 errors In-Reply-To: References: <20171129124256.GI78325@mdounin.ru> Message-ID: <3E53EB12-A789-48BD-902C-1FA77AB2C9DC@me.com> There are many things that *could* cause what you?re seeing - say at least eight. You might be lucky and guess the right one- but probably smarter to see exactly what the issue is. Presumably you changed your upstream webservers to do this work, replacing ssl with unencrypted connections? Do you have sar data showing #tcp connections before and after the change? Perhaps every request is negotiating SSL now? What if you add another nginx instance that doesn?t use ssl at all (just as a test) - does that also have 502s?. You probably have data you need to isolate Sent from my iPhone > On Nov 29, 2017, at 8:05 AM, Michael Ottoson wrote: > > Thanks, Maxim. 
> > That makes a lot of sense. However, the problem started at exactly the same time we moved SSL termination. There were no changes to the application. It is unlikely to be a mere coincidence - but it could be. > > We were previously using HAPROXY for load balancing (well, the company we inherited this from did) and the same happened when they tried moving SSL termination. > > There is a reply to my question on serverfault, suggesting increasing keepalives (https://www.nginx.com/blog/load-balancing-with-nginx-plus-part2/#keepalive). This is because moving SSL increases the number of TCP connects. I'll give that a try and report back. > > -----Original Message----- > From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Maxim Dounin > Sent: Wednesday, November 29, 2017 7:43 AM > To: nginx at nginx.org > Subject: Re: Moving SSL termination to the edge increased the instance of 502 errors > > Hello! > >> On Wed, Nov 29, 2017 at 04:27:37AM +0000, Michael Ottoson wrote: >> >> Hi All, >> >> We installed nginx as load balancer/failover in front of two upstream web servers. >> >> At first SSL terminated at the web servers and nginx was configured as TCP passthrough on 443. >> >> We rarely experiences 502s and when it did it was likely due to tuning/tweaking. >> >> About a week ago we moved SSL termination to the edge. Since then we've been getting daily 502s. A small percentage - never reaching 1%. But with ? million requests per day, we are starting to get complaints. >> >> Stranger: the percentage seems to be rising. >> >> I have more details and a pretty picture here: >> >> https://serverfault.com/questions/885638/moving-ssl-termination-to-the >> -edge-increased-the-instance-of-502-errors >> >> >> Any advice how to squash those 502s? Should I be worried nginx is leaking? > > First of all, you have to find the reason for these 502 errors. > Looking into the error log is a good start. 
> > As per provided serverfault question, you see "no live upstreams" > errors in logs. These errors mean that all configured upstream servers were disabled due to previous errors (see http://nginx.org/en/docs/http/ngx_http_upstream_module.html#max_fails), > that is, these errors are just a result of previous errors. You have to find out real errors, they should be in the error log too. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mintern at everlaw.com Thu Nov 30 05:28:24 2017 From: mintern at everlaw.com (Brandon Mintern) Date: Thu, 30 Nov 2017 00:28:24 -0500 Subject: Overridable header values (with map?) Message-ID: We're using nginx for several different types of servers, but we're trying to unify the configuration to minimize shared code. One stumbling block is headers. For most requests, we want to add a set of standard headers: # headers.conf: add_header Cache-Control $cache_control; add_header X-Robots-Tag $robots_tag always; add_header X-Frame-Options $frame_options; add_header X-XSS-Protection "1; mode=block"; add_header X-Content-Type-Options nosniff; # several more... Many of the headers are the same for all requests, but the first three are tweaked for specific resources or target servers. The first approach I took was to define two files: # header-vars.conf: # Values for the $cache_control header. By default, we use $one_day. set $no_cache "max-age=0, no-store, no-cache, must-revalidate"; set $one_day "public, max-age=86400"; set $one_year "public, max-age=31536000"; set $cache_control $one_day; # To allow robots, override this variable using `set $robots_tag all;`. 
set $robots_tag "noindex, nofollow, nosnippet, noarchive"; set $frame_options "SAMEORIGIN"; ...and the headers.conf above. Then, at appropriate contexts (either a server or location block), different servers would include the files as follows: include header-vars.conf; include headers.conf; That would give them all of our defaults. If the specific application or context needs to tweak the caching and robots, it might do something like this: include header-vars.conf; set $cache_control $no_cache; set $robots_tag all; include headers.conf; This was fine, but I recently came across an interesting use of map that I thought I could generalize to simplify this pattern. My idea was to do something like: # header-vars.conf: map $robots $robots_tag { # Disallowed default "noindex, nofollow, nosnippet, noarchive"; off "noindex, nofollow, nosnippet, noarchive"; # Allowed on all; } map $frames $frame_options { # Allow in frames only on from the same origin (URL). default "SAMEORIGIN"; # This isn't a real value, but it will cause the header to be ignored. allow "ALLOW"; } map $cache $cache_control { # no caching off "max-age=0, no-store, no-cache, must-revalidate"; # one day default "public, max-age=86400"; 1d "public, max-age=86400"; # one year 1y "public, max-age=31536000"; } I thought this would allow me to include both header-vars.conf and headers.conf in the http block. Then, within the server or location blocks, I wouldn't have to do anything to get the defaults. Or, to tweak robots and caching: set $cache off; set $robots on; Since the variables wouldn't be evaluated till the headers were actually added, I thought this would work well and simplify things a lot. Unfortunately, I was mistaken that I would be able to use an undefined variable in the first position of a map directive (I thought it would just be empty): unknown "robots" variable Of course, I can't set a default value for that variable since I'm including header-vars.conf at the http level. 
I'd rather not need to include defaults in every server (there are many). Does anyone have any suggestions for how I can better solve this problem? Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From lagged at gmail.com Thu Nov 30 06:07:40 2017 From: lagged at gmail.com (Andrei) Date: Thu, 30 Nov 2017 00:07:40 -0600 Subject: Migrating from Varnish In-Reply-To: References: Message-ID: Would it be possible to use the Redis module to track cache? For example, I would like to log each "new" cache hit, and include the URL, cache expiration time, and possibly the file it's stored in? On Nov 23, 2017 23:51, "itpp2012" wrote: > Andrei Wrote: > ------------------------------------------------------- > > Thanks for the tip. Have you ran into any issues as Maxim mentioned? > > > > Not yet. > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,277462,277487#msg-277487 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas at glanzmann.de Thu Nov 30 06:08:27 2017 From: thomas at glanzmann.de (Thomas Glanzmann) Date: Thu, 30 Nov 2017 07:08:27 +0100 Subject: Set Expires Header only if upstream has not already set an Expires In-Reply-To: <20171122232633.GM3127@daoine.org> References: <20171122094149.GD3702@glanzmann.de> <20171122232633.GM3127@daoine.org> Message-ID: <20171130060827.GA5732@glanzmann.de> Hello francis, > > Howto set expires only if upstream does not have set an expires? > * Francis Daly [2017-11-23 00:26]: > You can set a value based on $upstream_http_expires -- > { default off; "" 7d; } > in the appropriate "map" should set your Expires time to 7 days from > now if there is not an Expires: header from the upstream. thanks a lot. That solved my problem. 
I used the same: map $upstream_http_expires $expires { default off; "" 7d; } server { ... expires $expires; } Works like a charm. Thank you again for solving my problem. I thought about using a map but missed the 'off' possibility and its behaviour. Cheers, Thomas From tongshushan at migu.cn Thu Nov 30 09:12:18 2017 From: tongshushan at migu.cn (tongshushan at migu.cn) Date: Thu, 30 Nov 2017 17:12:18 +0800 Subject: How to control the total requests in Ngnix Message-ID: <2017113017121771024321@migu.cn> Hi guys, I want to use ngnix to protect my system,to allow max 2000 requests sent to my service(http location). The below configs are only for per client ip,not for the total requests control. ##########method 1########## limit_conn_zone $binary_remote_addr zone=addr:10m; server { location /mylocation/ { limit_conn addr 2; proxy_pass http://my_server/mylocation/; proxy_set_header Host $host:$server_port; } } ##########method 2########## limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s; server { location /mylocation/ { limit_req zone=one burst=5 nodelay; proxy_pass http://my_server/mylocation/; proxy_set_header Host $host:$server_port; } } How can I do it? Tong -------------- next part -------------- An HTML attachment was scrubbed... URL: From tongshushan at migu.cn Thu Nov 30 09:14:33 2017 From: tongshushan at migu.cn (tongshushan at migu.cn) Date: Thu, 30 Nov 2017 17:14:33 +0800 Subject: =?UTF-8?Q?=E5=9B=9E=E5=A4=8D=3A_How_to_control_the_total_requests_in_Ngnix?= References: <2017113017121771024321@migu.cn> Message-ID: <2017113017143383109823@migu.cn> Additional: the total requests will be sent from different client ips. Tong ???? tongshushan at migu.cn ????? 2017-11-30 17:12 ???? nginx ??? How to control the total requests in Ngnix Hi guys, I want to use ngnix to protect my system,to allow max 2000 requests sent to my service(http location). The below configs are only for per client ip,not for the total requests control. 
##########method 1########## limit_conn_zone $binary_remote_addr zone=addr:10m; server { location /mylocation/ { limit_conn addr 2; proxy_pass http://my_server/mylocation/; proxy_set_header Host $host:$server_port; } } ##########method 2########## limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s; server { location /mylocation/ { limit_req zone=one burst=5 nodelay; proxy_pass http://my_server/mylocation/; proxy_set_header Host $host:$server_port; } } How can I do it? Tong -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at lazygranch.com Thu Nov 30 09:44:42 2017 From: lists at lazygranch.com (Gary) Date: Thu, 30 Nov 2017 01:44:42 -0800 Subject: =?UTF-8?Q?Re=3A_=E5=9B=9E=E5=A4=8D=3A_How_to_control_the_total_requests_in?= =?UTF-8?Q?_Ngnix?= In-Reply-To: <2017113017143383109823@migu.cn> Message-ID: An HTML attachment was scrubbed... URL: From francis at daoine.org Thu Nov 30 10:17:07 2017 From: francis at daoine.org (Francis Daly) Date: Thu, 30 Nov 2017 10:17:07 +0000 Subject: How to control the total requests in Ngnix In-Reply-To: <2017113017121771024321@migu.cn> References: <2017113017121771024321@migu.cn> Message-ID: <20171130101707.GN3127@daoine.org> On Thu, Nov 30, 2017 at 05:12:18PM +0800, tongshushan at migu.cn wrote: Hi there, > I want to use ngnix to protect my system,to allow max 2000 requests sent to my service(http location). > The below configs are only for per client ip,not for the total requests control. > ##########method 1########## > > limit_conn_zone $binary_remote_addr zone=addr:10m; http://nginx.org/r/limit_conn_zone If "key" is "$binary_remote_addr", it will be the same for the same client ip, and different for different client ips; the limits apply to each individual value of client ip (strictly: to each individual value of "key"). If "key" is (for example) "fixed", it will be the same for every connection, and so the limits will apply for all connections. 
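[Editor's note] A global (all-clients) variant of the configs quoted above, following the fixed-key suggestion, might look like the sketch below. This is an untested illustration: the zone names, zone sizes, and burst value are assumptions; only the constant key "all" is the essential point.

```nginx
# Hypothetical global limits: the constant key "all" is identical for every
# request, so a single shared counter applies across all client IPs.
limit_conn_zone "all" zone=total_conn:10m;
limit_req_zone  "all" zone=total_req:10m rate=2000r/s;

server {
    location /mylocation/ {
        limit_conn total_conn 2000;          # at most 2000 concurrent connections in total
        limit_req  zone=total_req burst=100; # roughly 2000 requests/second in total
        proxy_pass http://my_server/mylocation/;
        proxy_set_header Host $host:$server_port;
    }
}
```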
Note: that limits concurrent connections, not requests. > ##########method 2########## > > limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s; http://nginx.org/r/limit_req_zone Again, set "key" to something that is the same for all requests, and the limit will apply to all requests. f -- Francis Daly francis at daoine.org From anoopalias01 at gmail.com Thu Nov 30 11:08:33 2017 From: anoopalias01 at gmail.com (Anoop Alias) Date: Thu, 30 Nov 2017 16:38:33 +0530 Subject: Moving SSL termination to the edge increased the instance of 502 errors In-Reply-To: <3E53EB12-A789-48BD-902C-1FA77AB2C9DC@me.com> References: <20171129124256.GI78325@mdounin.ru> <3E53EB12-A789-48BD-902C-1FA77AB2C9DC@me.com> Message-ID: Since the upstream now has changed tcp ports - do check if it is a firewall/network buffer etc issue too on the new port. On Wed, Nov 29, 2017 at 11:42 PM, Peter Booth wrote: > There are many things that *could* cause what you?re seeing - say at least > eight. You might be lucky and guess the right one- but probably smarter to > see exactly what the issue is. > > Presumably you changed your upstream webservers to do this work, replacing > ssl with unencrypted connections? Do you have sar data showing #tcp > connections before and after the change? Perhaps every request is > negotiating SSL now? > What if you add another nginx instance that doesn?t use ssl at all (just > as a test) - does that also have 502s?. You probably have data you need to > isolate > > Sent from my iPhone > > > On Nov 29, 2017, at 8:05 AM, Michael Ottoson > wrote: > > > > Thanks, Maxim. > > > > That makes a lot of sense. However, the problem started at exactly the > same time we moved SSL termination. There were no changes to the > application. It is unlikely to be a mere coincidence - but it could be. > > > > We were previously using HAPROXY for load balancing (well, the company > we inherited this from did) and the same happened when they tried moving > SSL termination. 
> > > > There is a reply to my question on serverfault, suggesting increasing > keepalives (https://www.nginx.com/blog/load-balancing-with-nginx- > plus-part2/#keepalive). This is because moving SSL increases the number > of TCP connects. I'll give that a try and report back. > > > > -----Original Message----- > > From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Maxim Dounin > > Sent: Wednesday, November 29, 2017 7:43 AM > > To: nginx at nginx.org > > Subject: Re: Moving SSL termination to the edge increased the instance > of 502 errors > > > > Hello! > > > >> On Wed, Nov 29, 2017 at 04:27:37AM +0000, Michael Ottoson wrote: > >> > >> Hi All, > >> > >> We installed nginx as load balancer/failover in front of two upstream > web servers. > >> > >> At first SSL terminated at the web servers and nginx was configured as > TCP passthrough on 443. > >> > >> We rarely experiences 502s and when it did it was likely due to > tuning/tweaking. > >> > >> About a week ago we moved SSL termination to the edge. Since then > we've been getting daily 502s. A small percentage - never reaching 1%. > But with ? million requests per day, we are starting to get complaints. > >> > >> Stranger: the percentage seems to be rising. > >> > >> I have more details and a pretty picture here: > >> > >> https://serverfault.com/questions/885638/moving-ssl-termination-to-the > >> -edge-increased-the-instance-of-502-errors > >> > >> > >> Any advice how to squash those 502s? Should I be worried nginx is > leaking? > > > > First of all, you have to find the reason for these 502 errors. > > Looking into the error log is a good start. > > > > As per provided serverfault question, you see "no live upstreams" > > errors in logs. These errors mean that all configured upstream servers > were disabled due to previous errors (see http://nginx.org/en/docs/http/ > ngx_http_upstream_module.html#max_fails), > > that is, these errors are just a result of previous errors. 
You have to > find out real errors, they should be in the error log too. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From tongshushan at migu.cn Thu Nov 30 11:52:58 2017 From: tongshushan at migu.cn (tongshushan at migu.cn) Date: Thu, 30 Nov 2017 19:52:58 +0800 Subject: How to control the total requests in Ngnix References: <20171130094457.DF7902C56ACF@mail.nginx.com> Message-ID: <2017113019525810745325@migu.cn> A limit of two connections per address is just an example. What does 2000 requests mean? Is that per second? Yes, it's QPS. Mobile: 13818663262 Telephone: 021-51856688(81275) Email: tongshushan at migu.cn From: Gary Date: 2017-11-30 17:44 To: nginx Subject: Re: How to control the total requests in Ngnix I think a limit of two connections per address is too low. I know that tip pages suggest a low limit in so-called anti-DDoS (really just flood protection). Some large carriers can generate 30+ connections per IP, probably because they lack sufficient IPv4 address space for their millions of users. This is based on my logs. I used to have a limit of 10 and it was reached quite often just from corporate users. The 10 per second rate is fine, and probably about as low as you should go. What does 2000 requests mean? Is that per second? 
From: tongshushan at migu.cn Sent: November 30, 2017 1:14 AM To: nginx at nginx.org Reply-to: nginx at nginx.org Subject: Re: How to control the total requests in Ngnix Additional: the total requests will be sent from different client IPs. Tong From: tongshushan at migu.cn Date: 2017-11-30 17:12 To: nginx Subject: How to control the total requests in Ngnix Hi guys, I want to use nginx to protect my system, to allow a maximum of 2000 requests sent to my service (http location). The below configs are only per client IP, not for controlling the total requests. 
> ##########method 1########## > > limit_conn_zone $binary_remote_addr zone=addr:10m; http://nginx.org/r/limit_conn_zone If "key" is "$binary_remote_addr", it will be the same for the same client ip, and different for different client ips; the limits apply to each individual value of client ip (strictly: to each individual value of "key"). If "key" is (for example) "fixed", it will be the same for every connection, and so the limits will apply for all connections. Note: that limits concurrent connections, not requests. > ##########method 2########## > > limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s; http://nginx.org/r/limit_req_zone Again, set "key" to something that is the same for all requests, and the limit will apply to all requests. f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Nov 30 14:45:07 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 30 Nov 2017 17:45:07 +0300 Subject: Overridable header values (with map?) In-Reply-To: References: Message-ID: <20171130144507.GM78325@mdounin.ru> Hello! On Thu, Nov 30, 2017 at 12:28:24AM -0500, Brandon Mintern wrote: [...] > This was fine, but I recently came across an interesting use of map > that I thought I could generalize > to simplify this pattern. My idea was to do something like: > > # header-vars.conf: > > map $robots $robots_tag { > > # Disallowed > default "noindex, nofollow, nosnippet, noarchive"; > off "noindex, nofollow, nosnippet, noarchive"; > > # Allowed > on all; > } [...] 
> Unfortunately, I was mistaken that I would be able to use an undefined > variable in the first position of a map directive (I thought it would just > be empty): > > unknown "robots" variable > > Of course, I can't set a default value for that variable since I'm > including header-vars.conf at the http level. I'd rather not need to > include defaults in every server (there are many). > > Does anyone have any suggestions for how I can better solve this problem? The error in question will only appear if you don't have the variable defined at all, that is, it is not used anywhere in your configuration. Using it at least somewhere will resolve the error. That is, just add something like set $robots off; anywhere in your configuration as appopriate (for example, in the default server{} block). Once you will be able to start nginx, you'll start getting warnings when the variable is used uninitialized, e.g.: ... [warn] ... using uninitialized "robots" variable ... These warnings can be switched off using the uninitialized_variable_warn directive, see http://nginx.org/r/uninitialized_variable_warn. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Thu Nov 30 17:20:19 2017 From: nginx-forum at forum.nginx.org (traquila) Date: Thu, 30 Nov 2017 12:20:19 -0500 Subject: Multiple Cache Manager Processes or Threads Message-ID: <65b505d5e2a11bf67672ab44da47e4df.NginxMailingListEnglish@forum.nginx.org> Hello, I have an issue with the cache manager and the way I use it. When I configure 2 different caches zones, one very huge and one very fast, the cache manager can't delete files quickly enough and lead to a partition full. 
For example: proxy_cache_path /mnt/hdd/cache levels=1:2:2 keys_zone=cache_hdd:40g max_size=40000g inactive=5d; proxy_cache_path /mnt/ram/cache levels=1:2 keys_zone=cache_ram:300m max_size=300g inactive=1h; In the beginning, the RAM cache is correctly purged at around 40GB (+/- input bandwidth * 10 sec), but when the HDD cache begins to fill up, the RAM cache grows over 50GB. I think the cache manager is held back by the slowness of the filesystem / hardware. I can fix this by using two nginx instances on the same machine, one configured as the RAM cache, the other as the HDD cache; but I wonder if it would be possible to create a cache manager process for each proxy_cache_path directive. Thanks in advance. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277597,277597#msg-277597 From nginx-forum at forum.nginx.org Thu Nov 30 17:25:52 2017 From: nginx-forum at forum.nginx.org (traquila) Date: Thu, 30 Nov 2017 12:25:52 -0500 Subject: Multiple Cache Manager Processes or Threads In-Reply-To: <65b505d5e2a11bf67672ab44da47e4df.NginxMailingListEnglish@forum.nginx.org> References: <65b505d5e2a11bf67672ab44da47e4df.NginxMailingListEnglish@forum.nginx.org> Message-ID: <05ba9c53551fdbba617d3af2c8126e42.NginxMailingListEnglish@forum.nginx.org> Sorry, I gave wrong values: In the beginning, the RAM cache is correctly purged at around 300GB (+/- input bandwidth * 10 sec), but when the HDD cache begins to fill up, the RAM cache grows over 320GB. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277597,277598#msg-277598 From nginx-forum at forum.nginx.org Thu Nov 30 17:46:54 2017 From: nginx-forum at forum.nginx.org (dmc) Date: Thu, 30 Nov 2017 12:46:54 -0500 Subject: Plan to support proxy protocol v2? Message-ID: Hi, The AWS ELBv2 works only with proxy protocol v2. Is there any plan to support this version in nginx soon? 
regards Posted at Nginx Forum: https://forum.nginx.org/read.php?2,277599,277599#msg-277599 From mintern at everlaw.com Thu Nov 30 18:14:37 2017 From: mintern at everlaw.com (Brandon Mintern) Date: Thu, 30 Nov 2017 13:14:37 -0500 Subject: Overridable header values (with map?) In-Reply-To: <20171130144507.GM78325@mdounin.ru> References: <20171130144507.GM78325@mdounin.ru> Message-ID: On Thu, Nov 30, 2017 at 9:45 AM, Maxim Dounin wrote: > Hello! > Hi! The error in question will only appear if you don't have the > variable defined at all, that is, it is not used anywhere in your > configuration. Using it at least somewhere will resolve the > error. That is, just add something like > > set $robots off; > > anywhere in your configuration as appopriate (for example, in the > default server{} block). > > Once you will be able to start nginx, you'll start getting > warnings when the variable is used uninitialized, e.g.: > > ... [warn] ... using uninitialized "robots" variable ... > > These warnings can be switched off using the > uninitialized_variable_warn directive, see > http://nginx.org/r/uninitialized_variable_warn. That worked perfectly! Thank you very much! -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Nov 30 18:21:55 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 30 Nov 2017 21:21:55 +0300 Subject: Multiple Cache Manager Processes or Threads In-Reply-To: <65b505d5e2a11bf67672ab44da47e4df.NginxMailingListEnglish@forum.nginx.org> References: <65b505d5e2a11bf67672ab44da47e4df.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20171130182155.GP78325@mdounin.ru> Hello! On Thu, Nov 30, 2017 at 12:20:19PM -0500, traquila wrote: > I have an issue with the cache manager and the way I use it. > When I configure 2 different caches zones, one very huge and one very fast, > the cache manager can't delete files quickly enough and lead to a partition > full. 
> > For example: > proxy_cache_path /mnt/hdd/cache levels=1:2:2 keys_zone=cache_hdd:40g > max_size=40000g inactive=5d; > proxy_cache_path /mnt/ram/cache levels=1:2 keys_zone=cache_ram:300m > max_size=300g inactive=1h; > > On the beginning, ram cache is correctly purge around 40GB (+/- Input > bandwidth*10sec) , but when the hdd cache begins to fill up, ram cache > growing over 50GB. I think the cache manager is stuck by the slowness of the > filesystem / hardware. > > I can fix this by using 2 nginx on the same machine, one configured as ram > cache, the other as hdd cache; but I wonder if it would be possible to > create a cache manager process for each proxy_cache_path directive. Which nginx version you are using? With nginx 1.11.5+, there are manager_files / manager_sleep / manager_threshold parameters you may want to play with, see http://nginx.org/r/proxy_cache_path. These parameters allows limiting cache manager's work on a particular cache to some finite time, and therefore help to better maintain specified max_size of other caches. If you are using an older version, an upgrade to the recent version might help even without further tuning, as older versions do not limit cache manager's work on a particular cache at all. -- Maxim Dounin http://mdounin.ru/ From francis at daoine.org Thu Nov 30 18:38:32 2017 From: francis at daoine.org (Francis Daly) Date: Thu, 30 Nov 2017 18:38:32 +0000 Subject: How to control the total requests in Ngnix In-Reply-To: <2017113020044137777429@migu.cn> References: <2017113017121771024321@migu.cn> <20171130101707.GN3127@daoine.org> <2017113020044137777429@migu.cn> Message-ID: <20171130183832.GO3127@daoine.org> On Thu, Nov 30, 2017 at 08:04:41PM +0800, tongshushan at migu.cn wrote: Hi there, > what is the same "key " for all requests from different client ips for limit_conn_zone/limit_req_zone? I have no idea on this. Any $variable might be different in different connections. Any fixed string will not be. 
So: limit_conn_zone "all" zone=all... for example. f -- Francis Daly francis at daoine.org From lists at lazygranch.com Thu Nov 30 18:53:34 2017 From: lists at lazygranch.com (Gary) Date: Thu, 30 Nov 2017 10:53:34 -0800 Subject: Re: How to control the total requests in Ngnix In-Reply-To: <2017113017143383109823@migu.cn> Message-ID: An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Nov 30 19:02:27 2017 From: nginx-forum at forum.nginx.org (reverson) Date: Thu, 30 Nov 2017 14:02:27 -0500 Subject: Return 408 to ELB Message-ID: I am running into an issue that I believe was documented here (https://trac.nginx.org/nginx/ticket/1005). Essentially, I am seeing alerts as our ELBs are sending 504s back to clients with no backend information attached, but when I look through our nginx request logs, I see that we "should have" sent them a 408. 
However, it > appears that nginx is just closing the connection. > > We are using keep-alive connections, and I was looking at using the > reset_timedout_connection parameter, but based on the documentation it > doesn't seem like this will help. Note that the only issue here is that the client sees 504 instead of 408. If these are real clients, you may want to use larger client_body_timeout and rely on the ELB timeouts instead. > Is there a way to actually send a 408 back to the client using nginx and > ELBs? No. -- Maxim Dounin http://mdounin.ru/ From peter_booth at me.com Thu Nov 30 22:25:31 2017 From: peter_booth at me.com (Peter Booth) Date: Thu, 30 Nov 2017 17:25:31 -0500 Subject: How to control the total requests in Ngnix In-Reply-To: <2017113019525810745325@migu.cn> References: <20171130094457.DF7902C56ACF@mail.nginx.com> <2017113019525810745325@migu.cn> Message-ID: So what exactly are you trying to protect against? Against ?bad people? or ?my website is busier than I think I can handle?? Sent from my iPhone > On Nov 30, 2017, at 6:52 AM, "tongshushan at migu.cn" wrote: > > a limit of two connections per address is just a example. > What does 2000 requests mean? Is that per second? yes?it's QPS. > > ??? > ?????????? ??? > Mobile?13818663262 > Telephone?021-51856688(81275) > Email?tongshushan at migu.cn > > ???? Gary > ????? 2017-11-30 17:44 > ???? nginx > ??? Re: ??: How to control the total requests in Ngnix > I think a limit of two connections per address is too low. I know that tip pages suggest a low limit in so-called anti-DDOS (really just flood protection). Some large carriers can generate 30+ connections per IP, probably because they lack sufficient IPV4 address space for their millions of users. This is based on my logs. I used to have a limit of 10 and it was reached quite often just from corporate users. > > The 10 per second rate is fine, and probably about as low as you should go. > > What does 2000 requests mean? Is that per second? 
> > > From: tongshushan at migu.cn > Sent: November 30, 2017 1:14 AM > To: nginx at nginx.org > Reply-to: nginx at nginx.org > Subject: ??: How to control the total requests in Ngnix > > Additional: the total requests will be sent from different client ips. > > Tong > > ???? tongshushan at migu.cn > ????? 2017-11-30 17:12 > ???? nginx > ??? How to control the total requests in Ngnix > Hi guys, > > I want to use ngnix to protect my system,to allow max 2000 requests sent to my service(http location). > The below configs are only for per client ip,not for the total requests control. > ##########method 1########## > > limit_conn_zone $binary_remote_addr zone=addr:10m; > server { > location /mylocation/ { > limit_conn addr 2; > proxy_pass http://my_server/mylocation/; > proxy_set_header Host $host:$server_port; > } > } > > ##########method 2########## > > limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s; > server { > location /mylocation/ { > limit_req zone=one burst=5 nodelay; > proxy_pass http://my_server/mylocation/; > proxy_set_header Host $host:$server_port; > } > } > > > > How can I do it? > > > Tong > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL:
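[Editor's note] Circling back to the "Return 408 to ELB" thread above: Maxim's suggestion of using a larger client_body_timeout and relying on the ELB's timeout could be sketched as follows. The 60-second ELB idle timeout is an assumption (it is configurable per load balancer), and the 75s values are illustrative only.

```nginx
# Sketch: keep nginx's client-facing timeouts longer than the ELB's idle
# timeout (assumed 60s here), so the ELB times out first and returns its own
# error instead of nginx silently closing the keep-alive connection.
server {
    client_header_timeout 75s;
    client_body_timeout   75s;
    keepalive_timeout     75s;
    # ... remaining server configuration ...
}
```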