From nginx-forum at forum.nginx.org Sat May 1 04:38:30 2021
From: nginx-forum at forum.nginx.org (kbolino)
Date: Sat, 01 May 2021 00:38:30 -0400
Subject: Revisiting 100-continue with unbuffered proxying
Message-ID:

Use case: Large uploads (hundreds of megabytes to tens of gigabytes) where nginx is serving as a reverse proxy and load balancer. The upstream servers can get bogged down, and when they do, they apply backpressure by responding with a 503 status code.

Problem: Naively implemented, the client sends the entire request body off to the server, then waits to find out that the server can't handle the request. Time and network bandwidth are wasted, and the client has to retry the request.

Partial solution: Using an idempotent request method, with "proxy_request_buffering on", and "proxy_next_upstream http_503", nginx will accept the upload from the client once, but try each server in succession until one works. Fortunately, nginx will set header "Expect: 100-continue" on each proxied request and will not send the request body off to an upstream server that isn't ready to receive it. However, nginx won't even begin to send a proxied request to any upstream server until the initial request body upload from the client has completed. Also, the entire request body has to be stored somewhere local to nginx and the speed of that storage has a direct impact on the performance of the whole process.

Next solution idea: Have the *client* set header "Expect: 100-continue". Then the client won't send the request body until nginx can find an upstream server to handle the request. However, this is not how things work today. Nginx will unconditionally accept the request with "100 Continue" regardless of upstream server status. With buffering enabled, this makes sense, since nginx wants to aggressively buffer the request body so it can re-send it if needed.

Refined solution idea: Disable buffering. Unfortunately, while setting "proxy_request_buffering off" and "proxy_http_version 1.1" does disable buffering, it doesn't disable nginx from immediately telling the client "100 Continue". Moreover, nginx only tries one upstream server before giving up, probably because it has no buffered copy of the request body to send to the next server on behalf of the client. Yet if nginx delayed sending "100 Continue" back to the client, it could take a little more time to find a viable upstream server.

I did some digging before bringing this topic up, and I found a proposed patch (http://mailman.nginx.org/pipermail/nginx-devel/2016-August/008736.html), a request in the forum (https://forum.nginx.org/read.php?2,212533,212533#msg-212533), and a trac ticket (https://trac.nginx.org/nginx/ticket/493), all to disable automatic handling of the 100-continue mechanism. The trac ticket was closed because unbuffered upload was not supported yet, the patch was rejected because it sounded like it was the other side's problem to solve, and finally the forum request was rejected because nginx was "designed as [an] accelerator to minimize backend interaction with a client".

As to that last quoted part, I agree! I'd rather have nginx figure things out than have the client finagle with the backend server too much. So here's what I think should happen. First, the client's Expect header should not get directly passed on to the upstream server, nor should nginx ignore the header entirely (i.e., keep these things the same as they are today).
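For reference, the "partial solution" described above boils down to a configuration along these lines (a minimal sketch only; the upstream name and addresses are hypothetical):

    upstream backends {
        server 192.0.2.10:8080;
        server 192.0.2.11:8080;
    }

    server {
        listen 443 ssl;  # certificate directives omitted

        location /upload {
            proxy_http_version 1.1;
            proxy_request_buffering on;    # nginx reads the whole body from the client once
            proxy_next_upstream http_503;  # a 503 means: re-send the buffered body to the next server
            proxy_pass http://backends;
        }
    }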
Instead, with unbuffered upload, an upstream block with multiple servers, and proxy_next_upstream set to try another server when one fails:

1. Client sends request with "Expect: 100-continue" to nginx
2. Nginx receives request but does not respond with anything yet
3. Nginx tries the first eligible server by sending a proxied request with "Expect: 100-continue" (not passthrough; this is nginx's own logic and this part exists today as far as I can tell)
4. If the server responds "100 Continue" to nginx *then* nginx responds "100 Continue" to the client and the unbuffered upload proceeds to that server
5. If instead the server fails in a way that proxy_next_upstream is configured to handle, then nginx still doesn't respond to the client, and now tries to reach the next eligible server instead.
6. This process proceeds until a server willing to accept the request is found or all servers have been tried and none are available, at which point nginx sends an appropriate non-100 response to the client (502/503/504).

(Caveat: If an upstream server fails *after* already accepting a request with "100 Continue", then nginx still has to give up since there's no buffering.)

Thoughts? I know there are other ways to solve this problem (e.g. S3-style multipart uploads), but there is a convenience to the "Expect: 100-continue" mechanism and it is pretty widely supported. I don't think this goes against the grain of what nginx is trying to be, especially since unbuffered uploads are supported now.

Thanks for your consideration,
Kristian Bolino

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291404,291404#msg-291404

From mdounin at mdounin.ru Sat May 1 07:30:09 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sat, 1 May 2021 10:30:09 +0300
Subject: Revisiting 100-continue with unbuffered proxying
In-Reply-To:
References:
Message-ID:

Hello!

On Sat, May 01, 2021 at 12:38:30AM -0400, kbolino wrote:

> Use case: Large uploads (hundreds of megabytes to tens of gigabytes) where nginx is serving as a reverse proxy and load balancer. The upstream servers can get bogged down, and when they do, they apply backpressure by responding with a 503 status code.
>
> Problem: Naively implemented, the client sends the entire request body off to the server, then waits to find out that the server can't handle the request. Time and network bandwidth are wasted, and the client has to retry the request.
>
> Partial solution: Using an idempotent request method, with "proxy_request_buffering on", and "proxy_next_upstream http_503", nginx will accept the upload from the client once, but try each server in succession until one works. Fortunately, nginx will set header "Expect: 100-continue" on each proxied request and will not send the request body off to an upstream server that isn't ready to receive it. However, nginx won't even

No, this is not how it works: nginx never uses "Expect: 100-continue" on requests to backends. It is, however, smart enough to stop sending the body as soon as the backend server responds with an error, so (almost) no bandwidth is wasted. This is, actually, what the HTTP specification suggests that all clients should be doing (https://tools.ietf.org/html/rfc7230#section-6.5):

    A client sending a message body SHOULD monitor the network connection for an error response while it is transmitting the request.
    If the client sees a response that indicates the server does not wish to receive the message body and is closing the connection, the client SHOULD immediately cease transmitting the body and close its side of the connection.

The simplest solution would be to fix the client to do the same.

> begin to send a proxied request to any upstream server until the initial request body upload from the client has completed. Also, the entire request body has to be stored somewhere local to nginx and the speed of that storage has a direct impact on the performance of the whole process.
>
> Next solution idea: Have the *client* set header "Expect: 100-continue". Then the client won't send the request body until nginx can find an upstream server to handle the request. However, this is not how things work today. Nginx will unconditionally accept the request with "100 Continue" regardless of upstream server status. With buffering enabled, this makes sense, since nginx wants to aggressively buffer the request body so it can re-send it if needed.
>
> Refined solution idea: Disable buffering. Unfortunately, while setting "proxy_request_buffering off" and "proxy_http_version 1.1" does disable buffering, it doesn't disable nginx from immediately telling the client "100 Continue". Moreover, nginx only tries one upstream server before giving up, probably because it has no buffered copy of the request body to send to the next server on behalf of the client. Yet if nginx delayed sending "100 Continue" back to the client, it could take a little more time to find a viable upstream server.
>
> I did some digging before bringing this topic up, and I found a proposed patch (http://mailman.nginx.org/pipermail/nginx-devel/2016-August/008736.html), a request in the forum (https://forum.nginx.org/read.php?2,212533,212533#msg-212533), and a trac ticket (https://trac.nginx.org/nginx/ticket/493), all to disable automatic handling of the 100-continue mechanism. The trac ticket was closed because unbuffered upload was not supported yet, the patch was rejected because it sounded like it was the other side's problem to solve, and finally the forum request was rejected because nginx was "designed as [an] accelerator to minimize backend interaction with a client".

Just in case, the patch simply makes nginx ignore the "Expect: 100-continue" header from the client; it won't make nginx pass the header to backend servers, or accept 100 (Continue) responses from backends.

> As to that last quoted part, I agree! I'd rather have nginx figure things out than have the client finagle with the backend server too much. So here's what I think should happen. First, the client's Expect header should not get directly passed on to the upstream server, nor should nginx ignore the header entirely (i.e., keep these things the same as they are today). Instead, with unbuffered upload, an upstream block with multiple servers, and proxy_next_upstream set to try another server when one fails:
>
> 1. Client sends request with "Expect: 100-continue" to nginx
> 2. Nginx receives request but does not respond with anything yet
> 3. Nginx tries the first eligible server by sending a proxied request with "Expect: 100-continue" (not passthrough; this is nginx's own logic and this part exists today as far as I can tell)
> 4. If the server responds "100 Continue" to nginx *then* nginx responds "100 Continue" to the client and the unbuffered upload proceeds to that server
> 5.
> If instead the server fails in a way that proxy_next_upstream is configured to handle, then nginx still doesn't respond to the client, and now tries to reach the next eligible server instead.
> 6. This process proceeds until a server willing to accept the request is found or all servers have been tried and none are available, at which point nginx sends an appropriate non-100 response to the client (502/503/504).
>
> (Caveat: If an upstream server fails *after* already accepting a request with "100 Continue", then nginx still has to give up since there's no buffering.)
>
> Thoughts? I know there are other ways to solve this problem (e.g. S3-style multipart uploads), but there is a convenience to the "Expect: 100-continue" mechanism and it is pretty widely supported. I don't think this goes against the grain of what nginx is trying to be, especially since unbuffered uploads are supported now.

While something like this might be more efficient than what we currently have, as of now there is no infrastructure in nginx to handle intermediate 1xx responses from backends (and to send them to clients), so it will not be trivial to implement this.

--
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org Sat May 1 17:09:20 2021
From: nginx-forum at forum.nginx.org (kbolino)
Date: Sat, 01 May 2021 13:09:20 -0400
Subject: Revisiting 100-continue with unbuffered proxying
In-Reply-To:
References:
Message-ID: <1904d70ca81ad21cbf373d9a946a8823.NginxMailingListEnglish@forum.nginx.org>

> No, this is not how it works: nginx never uses "Expect: 100-continue" on requests to backends. It is, however, smart enough to stop sending the body as soon as the backend server responds with an error, so (almost) no bandwidth is wasted.

Yeah, that's my fault. I left "proxy_set_header Expect $http_expect" in the config but forgot it was there. I didn't dig deep enough to actually verify the behavior, and simply assumed the presence of the header meant it was all working as expected.

> The simplest solution would be to fix the client to do the same.

I suspect the client may have been right all along and the root problem is either on the server end (doesn't close the connection aggressively enough) or at a higher level (request method is not idempotent). Thanks for the citation to the spec though; that gives me a good idea where to target optimizations.

> While something like this might be more efficient than what we currently have, as of now there is no infrastructure in nginx to handle intermediate 1xx responses from backends (and to send them to clients), so it will not be trivial to implement this.

That's unfortunate, but there's also probably not a lot of demand for this feature either.

Thanks for taking the time to respond,
Kristian Bolino

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291404,291406#msg-291406

From nginx-forum at forum.nginx.org Sat May 1 18:14:20 2021
From: nginx-forum at forum.nginx.org (satay)
Date: Sat, 01 May 2021 14:14:20 -0400
Subject: dns.log parsing
Message-ID:

Hello,

Before I get into this, I thought to ask here if there is a parser available somewhere already?

Thanks

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291407,291407#msg-291407

From hongyi.zhao at gmail.com Mon May 3 09:09:55 2021
From: hongyi.zhao at gmail.com (Hongyi Zhao)
Date: Mon, 3 May 2021 17:09:55 +0800
Subject: Forwarding to target website via a socks5 proxy.
Message-ID:

Suppose nginx is running on a host with the FQDN x.y.z and listening on x.y.z:8080. Can I have it forward requests to the target website, say a.b.c:80, via a specific SOCKS5 proxy, say x.y.z:21080, when I access the website x.y.z:8080?

Any hints for this problem will be highly appreciated.

Regards,
HY
--
Assoc. Prof. Hongyi Zhao
Theory and Simulation of Materials
Hebei Vocational University of Technology and Engineering
NO. 552 North Gangtie Road, Xingtai, China

From kaushalshriyan at gmail.com Mon May 3 16:47:06 2021
From: kaushalshriyan at gmail.com (Kaushal Shriyan)
Date: Mon, 3 May 2021 22:17:06 +0530
Subject: SSL Cipher suites settings in Nginx webserver
Message-ID:

Hi,

I am using Let's Encrypt SSL certificates for the Nginx 1.20.0 webserver running on CentOS Linux release 7.9.2009 (Core). I will appreciate it if someone can guide me to set the cipher suites in the Nginx webserver config. I am referring to https://ssl-config.mozilla.org/. Is there a way to verify if the below cipher suites set are accurate and are free from any vulnerabilities?

$openssl version
OpenSSL 1.0.2k-fips 26 Jan 2017
$cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
$nginx -v
nginx version: nginx/1.20.0

ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;

Please guide and I look forward to hearing from you. Thanks in Advance.

Best Regards,

Kaushal

From teward at thomas-ward.net Mon May 3 16:51:51 2021
From: teward at thomas-ward.net (Thomas Ward)
Date: Mon, 3 May 2021 12:51:51 -0400
Subject: SSL Cipher suites settings in Nginx webserver
In-Reply-To:
References:
Message-ID: <5551f800-788c-0257-a540-8f6ccae4acd6@thomas-ward.net>

The Mozilla configuration tool for ciphers is generally the best source for cipher information; they update it regularly as things change in terms of "best ciphers to utilize" and as security issues crop up.

All of those ciphers, in my opinion, are fine. The discussion of whether these ciphers are free from vulnerabilities, however, is not an NGINX issue but an OpenSSL / SSL spec discussion that extends far beyond NGINX.

Thomas

On 5/3/21 12:47 PM, Kaushal Shriyan wrote:
> Hi,
>
> I am using Let's Encrypt SSL certificates for the Nginx 1.20.0 webserver running on CentOS Linux release 7.9.2009 (Core). I will appreciate it if someone can guide me to set the cipher suites in the Nginx webserver config. I am referring to https://ssl-config.mozilla.org/. Is there a way to verify if the below cipher suites set are accurate and are free from any vulnerabilities?
>
> $openssl version
> OpenSSL 1.0.2k-fips 26 Jan 2017
> $cat /etc/redhat-release
> CentOS Linux release 7.9.2009 (Core)
> $nginx -v
> nginx version: nginx/1.20.0
>
> ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
>
> Please guide and I look forward to hearing from you. Thanks in Advance.
>
> Best Regards,
>
> Kaushal
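One quick way to see exactly which suites such a cipher string expands to is the standard openssl CLI (output abbreviated here to the first two entries):

    $ openssl ciphers -v 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256'
    ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(128) Mac=AEAD
    ECDHE-RSA-AES128-GCM-SHA256   TLSv1.2 Kx=ECDH Au=RSA   Enc=AESGCM(128) Mac=AEAD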
From lukas at ltri.eu Mon May 3 16:59:28 2021
From: lukas at ltri.eu (Lukas Tribus)
Date: Mon, 3 May 2021 18:59:28 +0200
Subject: SSL Cipher suites settings in Nginx webserver
In-Reply-To:
References:
Message-ID:

On Mon, 3 May 2021 at 18:47, Kaushal Shriyan wrote:
>
> Hi,
>
> Is there a way to verify if the below cipher suites set are accurate and are free from any vulnerabilities?

I suggest you use tools like the public Qualys ssltest: https://www.ssllabs.com/ssltest/ or testssl: https://github.com/drwetter/testssl.sh

Lukas

From mdounin at mdounin.ru Tue May 4 15:53:41 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 4 May 2021 18:53:41 +0300
Subject: upgrading binary failed - execve - too long argument list
In-Reply-To:
References:
Message-ID:

Hello!

On Fri, Apr 30, 2021 at 11:14:48AM +0200, Charlie Kilo wrote:

> correction.. if i count with
> ss -l -t |grep http | wc -l
> we have around 58340 listening sockets.. at least on that machine..

Since "listen ... reuseport;" implies a separate listening socket for each worker process, that's expected with 50 worker processes and 1200 "listen ... reuseport" in the configuration.

> On Fri, Apr 30, 2021 at 8:27 AM Charlie Kilo wrote:
>
> > Thanks a lot for the hints so far.. to provide the further info and answer the questions..
> >
> > getconf ARG_MAX shows 2097152
> > ulimit -s shows 8192
> > setting it to unlimited, doesn't change anything (also not with prlimit)
> > wc -c /proc//environ shows 1949
> >
> > it seems on a regular machine we have around 1200 listening sockets and do indeed use "reuseport"

Thanks, so all seem to be using the defaults.

Looking more closely at Linux limits, it seems that you are hitting the MAX_ARG_STRLEN limit. It limits single argument length (and a single environment variable length), and is hardcoded to 32 pages, that is, almost always 128k.

With the 128k limit nginx should be able to pass about 20k listening sockets, so with 58340 sockets you are well above the limit.

An obvious workaround would be to reduce the number of listening sockets - either by dropping the "reuseport", or by reducing the number of "listen" directives (normally just one on the wildcard address is enough), or by reducing the number of worker processes.

It would be interesting to know why you are using such a large number of listening sockets with reuseport enabled. If the use case is reasonable, we probably should consider implementing some workaround for the 128k limit.

--
Maxim Dounin
http://mdounin.ru/

From me at mheard.com Wed May 5 00:41:48 2021
From: me at mheard.com (Mathew Heard)
Date: Wed, 5 May 2021 10:41:48 +1000
Subject: upgrading binary failed - execve - too long argument list
In-Reply-To:
References:
Message-ID:

Maxim,

Out of curiosity I checked some of my servers with "ss -l -t -p | grep nginx | wc -l". I'm at 15k at most. Large numbers of worker processes make it relatively easy to hit the ~20k limit it seems.

With Regards,
Mathew

On Wed, 5 May 2021 at 01:53, Maxim Dounin wrote:

> Hello!
>
> On Fri, Apr 30, 2021 at 11:14:48AM +0200, Charlie Kilo wrote:
>
> > correction.. if i count with
> > ss -l -t |grep http | wc -l
> > we have around 58340 listening sockets.. at least on that machine..
>
> Since "listen ... reuseport;" implies a separate listening socket for each worker process, that's expected with 50 worker processes and 1200 "listen ... reuseport" in the configuration.
>
> > On Fri, Apr 30, 2021 at 8:27 AM Charlie Kilo wrote:
> >
> > > Thanks a lot for the hints so far..
> > > to provide the further info and answer the questions..
> > >
> > > getconf ARG_MAX shows 2097152
> > > ulimit -s shows 8192
> > > setting it to unlimited, doesn't change anything (also not with prlimit)
> > > wc -c /proc//environ shows 1949
> > >
> > > it seems on a regular machine we have around 1200 listening sockets and do indeed use "reuseport"
>
> Thanks, so all seem to be using the defaults.
>
> Looking more closely at Linux limits, it seems that you are hitting the MAX_ARG_STRLEN limit. It limits single argument length (and a single environment variable length), and is hardcoded to 32 pages, that is, almost always 128k.
>
> With the 128k limit nginx should be able to pass about 20k listening sockets, so with 58340 sockets you are well above the limit.
>
> An obvious workaround would be to reduce the number of listening sockets - either by dropping the "reuseport", or by reducing the number of "listen" directives (normally just one on the wildcard address is enough), or by reducing the number of worker processes.
>
> It would be interesting to know why you are using such a large number of listening sockets with reuseport enabled. If the use case is reasonable, we probably should consider implementing some workaround for the 128k limit.
>
> --
> Maxim Dounin
> http://mdounin.ru/

From shahzaib.cb at gmail.com Wed May 5 10:03:51 2021
From: shahzaib.cb at gmail.com (shahzaib mushtaq)
Date: Wed, 5 May 2021 15:03:51 +0500
Subject: Hostname based bandwidth!
Message-ID:

Hello,

Does nginx provide hostname based bandwidth monitoring? Suppose we've two virtualhosts:

domain1.com
domain2.com

Now we need to calculate how much bandwidth each virtualhost is consuming so at the end of the month we can get a monthly bandwidth usage for each hostname?

Regards.
Shahzaib

From nginx-forum at forum.nginx.org Wed May 5 19:43:46 2021
From: nginx-forum at forum.nginx.org (bobbidinho)
Date: Wed, 05 May 2021 15:43:46 -0400
Subject: How to rate limit GRPC connections based on authorization (bearer) token in Nginx Ingress?
Message-ID: <63015c16a741618849c144881af0cf36.NginxMailingListEnglish@forum.nginx.org>

I am trying to rate limit the number of GRPC connections based on a token included in the Authorization header. I tried the following settings in the Nginx configmap and Ingress annotation but Nginx rate limiting is not working.

```
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller
  namespace: default
data:
  http-snippet: |
    limit_req_zone $http_authorization zone=zone-1:20m rate=10r/m;
    limit_req_zone $http_token zone=zone-2:20m rate=10r/m;

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: GRPC
    nginx.ingress.kubernetes.io/configuration-snippet: |
      limit_req zone=zone-1;
      limit_req_log_level notice;
      limit_req_status 429;
```

I am trying to have the Nginx Ingress Controller rate limit the GRPC/HTTP2 stream connection based on the value in the $http_authorization variable. I have modified the Nginx log_format to log the $http_authorization value and observe that Nginx receives the value.
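For illustration, a log_format along these lines will surface the header value in the access log (a sketch; the format name and field selection are arbitrary):

    log_format authlog '$remote_addr [$time_local] "$request" '
                       '$status "$http_authorization"';
    access_log /var/log/nginx/access.log authlog;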
The problem I am facing is that for some reason the rate limiting rule doesn't get triggered.

Is this the correct approach?

Any help and feedback would be much appreciated!

Thanks

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291431,291431#msg-291431

From osa at freebsd.org.ru Wed May 5 20:09:40 2021
From: osa at freebsd.org.ru (Sergey A. Osokin)
Date: Wed, 5 May 2021 23:09:40 +0300
Subject: How to rate limit GRPC connections based on authorization (bearer) token in Nginx Ingress?
In-Reply-To: <63015c16a741618849c144881af0cf36.NginxMailingListEnglish@forum.nginx.org>
References: <63015c16a741618849c144881af0cf36.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Hi there,

hope you're doing well.

Please correct me if I'm wrong, but this looks like a manifest file for the NGINX Ingress Controller from the Kubernetes project itself, i.e. https://kubernetes.github.io/ingress-nginx/, and if so I'd recommend switching to NGINX Ingress Controller for Kubernetes; please visit https://github.com/nginxinc/kubernetes-ingress/ to get more details.

--
Sergey Osokin

On Wed, May 05, 2021 at 03:43:46PM -0400, bobbidinho wrote:
> I am trying to rate limit the number of GRPC connections based on a token included in the Authorization header. I tried the following settings in the Nginx configmap and Ingress annotation but Nginx rate limiting is not working.
>
> ```
> ---
> apiVersion: v1
> kind: ConfigMap
> metadata:
>   name: nginx-ingress-controller
>   namespace: default
> data:
>   http-snippet: |
>     limit_req_zone $http_authorization zone=zone-1:20m rate=10r/m;
>     limit_req_zone $http_token zone=zone-2:20m rate=10r/m;
>
> apiVersion: extensions/v1beta1
> kind: Ingress
> metadata:
>   annotations:
>     kubernetes.io/ingress.class: nginx
>     nginx.ingress.kubernetes.io/backend-protocol: GRPC
>     nginx.ingress.kubernetes.io/configuration-snippet: |
>       limit_req zone=zone-1;
>       limit_req_log_level notice;
>       limit_req_status 429;
> ```
> I am trying to have the Nginx Ingress Controller rate limit the GRPC/HTTP2 stream connection based on the value in the $http_authorization variable. I have modified the Nginx log_format to log the $http_authorization value and observe that Nginx receives the value. The problem I am facing is that for some reason the rate limiting rule doesn't get triggered.
>
> Is this the correct approach?
>
> Any help and feedback would be much appreciated!

--
Sergey Osokin

From tony.malish at yumaworks.com Wed May 5 21:24:52 2021
From: tony.malish at yumaworks.com (Anatoliy Malishevskiy)
Date: Wed, 5 May 2021 14:24:52 -0700
Subject: Cannot disable buffering during SSE connection
Message-ID:

Hello,

I am trying to migrate from Apache2 to NGINX and having issues with SSE connections. Everything else is working fine; the regular GET or POST requests are getting through successfully.

But there are 2 critical issues with SSE connections:
1) the NGINX holds up responses until the buffer is full
2) The NGINX blocks any other requests to the server if SSE is in progress.

With Apache2 I used to have issue #1, but it was easily resolved with the following configuration:
- *OutputBufferSize 0*

The issue #2 is really blocking me from using NGINX. Why does the server block other connections when SSE is in progress? How to fix this issue?

The restconf application in the configuration files is an HTTP/REST thin client application that is called by the FastCGI module in the WEB server to start a single request session for the specified user or SSE stream.
client <-> WEB server <-> restconf <-> subsystem netconf server

I am seeing that the NGINX keeps buffering my SSE output regardless of my configuration settings. I tried to use the following:
- Setting *fastcgi_buffering off* in the site config *location*;
- Setting *fastcgi_request_buffering off* in the site config *location*;
- Setting the header *X-Accel-Buffering: no* in my restconf application and in the client request.

Nothing is helping. The strange part is that the buffering settings are not working at all. The size of the buffer stays the same regardless of the configuration:
- fastcgi_buffer_size 4k;
- fastcgi_buffers 4 4k;
- fastcgi_busy_buffers_size 8k;

When I change the above settings, nothing changes in the actual program run. The whole event stream is held in NGINX until either the application finishes/terminates or the buffer is full.

Please let me know if you would need any more information regarding this issue.

Thanks

--
Anatoliy Malishevskiy
YumaWorks, Inc.

[Attachments scrubbed by the list archive: apache_restconf.conf, nginx-V_log.log, nginx_restconf, nginx.conf, fastcgi.conf]

From mdounin at mdounin.ru Thu May 6 03:40:20 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 6 May 2021 06:40:20 +0300
Subject: Cannot disable buffering during SSE connection
In-Reply-To:
References:
Message-ID:

Hello!

On Wed, May 05, 2021 at 02:24:52PM -0700, Anatoliy Malishevskiy wrote:

> I am trying to migrate from Apache2 to NGINX and having issues with SSE connections. Everything else is working fine; the regular GET or POST requests are getting through successfully.
>
> But there are 2 critical issues with SSE connections:
> 1) the NGINX holds up responses until the buffer is full
> 2) The NGINX blocks any other requests to the server if SSE is in progress.
>
> With Apache2 I used to have issue #1, but it was easily resolved with the following configuration:
> - *OutputBufferSize 0*
> The issue #2 is really blocking me from using NGINX. Why does the server block other connections when SSE is in progress? How to fix this issue?

[...]

The "fastcgi_buffering off;" should be enough to disable response buffering on the nginx side. If it doesn't work for you, this means one of the following:

- You've not configured it properly: for example, configured it in the wrong server block, or failed to reload configuration, or not using fastcgi_pass to process particular requests. I think this is unlikely, but you may want to double-check: for example, by configuring something like "return 404;" in the relevant location to see if it works.

- Some buffering happens in your FastCGI backend.
From your configuration it looks like you are using fastcgiwrap, which doesn't seem to provide any way to flush libfcgi buffers, so probably it is the culprit.

Your other problem is likely also due to fastcgiwrap, but this time incorrect configuration: by default it only provides one process, so any in-progress request, such as a server-sent events connection, will block all other requests from being handled. You have to configure more fcgiwrap processes using the "-c" command line argument to handle multiple parallel requests.

--
Maxim Dounin
http://mdounin.ru/

From ESpeake at dmp.com Thu May 6 14:01:40 2021
From: ESpeake at dmp.com (Eric Speake)
Date: Thu, 6 May 2021 14:01:40 +0000
Subject: Rewrite/redirect issue
Message-ID: <5860cf1639e94399b6c1181ea078e955@dmp.com>

I am trying to redirect mydomain.com/summersummit to new.mydomain.com/events/employee-events/summersummit2021. Here is what I have in my config:

location /summersummit {
    return 301 https://new.mydomain.com/events/employee-events/summersummit2021;
}

I have also tried:

rewrite ^/summersummit/(.*)$ https://new.dmp.com/events/employee-events/summersummit2021 permanent;

But the redirect doesn't work. It still goes to mydomain.com/summersummit.

Thanks for the help.
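A quick way to check what nginx actually returns for that path is a HEAD request from the command line (domain as in the post; the output shown is what you would expect if the "return 301" is being hit, other headers omitted):

    $ curl -sI https://mydomain.com/summersummit
    HTTP/1.1 301 Moved Permanently
    Location: https://new.mydomain.com/events/employee-events/summersummit2021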
URL: From nginx-forum at forum.nginx.org Thu May 6 17:45:39 2021 From: nginx-forum at forum.nginx.org (amenjava) Date: Thu, 06 May 2021 13:45:39 -0400 Subject: Nginx - How to only allow image requests Message-ID: Hello, I am setting up a proxy that accepts only image requests and sends them to upstream (internet). Sample config: # Limit requests to only images default_type image/webp; if ($request_uri !~* ^.+\.(jpg|jpeg|gif|css|png|html|htm|ico|xml|svg) ) { return 403; break; } The problem is there will be images without extension. How do we tackle that? I did try with Lua but didn't help for eg: https://test.com/shark.png - works (HTTP 200) https://test.com/shark - Doesnt work (HTTP 403) # Testing (this is just for testing If the logic works, but doesn't seem to) header_filter_by_lua ' local val = ngx.header["testheader"] if val then if (val ~= "img") or (val ~= "image") then return ngx.exit(400) end end --- Can anyone provide some pointers around it? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291446,291446#msg-291446 From ESpeake at dmp.com Thu May 6 19:15:55 2021 From: ESpeake at dmp.com (Eric Speake) Date: Thu, 6 May 2021 19:15:55 +0000 Subject: Rewrite/redirect issue In-Reply-To: <5860cf1639e94399b6c1181ea078e955@dmp.com> References: <5860cf1639e94399b6c1181ea078e955@dmp.com> Message-ID: Looking at the logs I see this: 192.168.250.30 - - [06/May/2021:13:27:27 -0500] "GET /summersummit HTTP/1.1" 301 185 "-" While seeing the 301 it's not redirecting anywhere. Thanks, From: nginx On Behalf Of Eric Speake Sent: Thursday, May 6, 2021 9:02 AM To: nginx at nginx.org Subject: Rewrite/redirect issue I am trying to redirect mydomain.com/summersummit to new.mydomain.com/events/employee-events/summersummit2021. Here is what I have in my config: location /summersummit { &nbs Caution! Message was sent from outside DMP. Please, use proper judgment when opening attachments, clicking links, or replying. Block sender sophospsmartbannerend I am trying to redirect mydomain.com/summersummit to new.mydomain.com/events/employee-events/summersummit2021. Here is what I have in my config: location /summersummit { return 301 https://new.mydomain.com/events/employee-events/summersummit2021; } I have also tried: rewrite ^/summersummit/(.*)$ https://new.dmp.com/events/employee-events/summersummit2021 permanent; But the redirect doesn't work. It still goes to mydonain.com/summersummit. Thanks for the help. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony.malish at yumaworks.com Thu May 6 23:49:10 2021 From: tony.malish at yumaworks.com (Anatoliy Malishevskiy) Date: Thu, 6 May 2021 16:49:10 -0700 Subject: Cannot disable buffering during SSE connection In-Reply-To: References: Message-ID: Thank you! I was able to fix both of these issues. However, the fix for the issue #2 is hacky and I am sure there has to be a better way to fix this issue. Explained below. On Wed, May 5, 2021 at 8:40 PM Maxim Dounin wrote: > Hello! > > On Wed, May 05, 2021 at 02:24:52PM -0700, Anatoliy Malishevskiy wrote: > > > I am trying to migrate from Apache2 to NGINX and having issues with SSE > > connections. Everything else is working fine, the regular GET or POST > > requests are getting through successfully. > > > > But there are 2 critical issues with SSE connections: > > 1) the NGINX holds up responses until the buffer is full > > 2) The NGINX blocks any other requests to the server if SSE is in > progress. 
> > With Apache2 I used to have issue #1, but it was easily resolved with the following configuration:
> > - *OutputBufferSize 0*
> > The issue #2 is really blocking me from using NGINX. Why does the server block other connections when SSE is in progress? How to fix this issue?
>
> [...]
>
> The "fastcgi_buffering off;" should be enough to disable response buffering on the nginx side. If it doesn't work for you, this means one of the following:
>
> - You've not configured it properly: for example, configured it in the wrong server block, or failed to reload configuration, or not using fastcgi_pass to process particular requests. I think this is unlikely, but you may want to double-check: for example, by configuring something like "return 404;" in the relevant location to see if it works.
>
> - Some buffering happens in your FastCGI backend. From your configuration it looks like you are using fastcgiwrap, which doesn't seem to provide any way to flush libfcgi buffers, so probably it is the culprit.

You were right to point out fastcgiwrap; it was keeping buffers until the application terminated or the buffer was full. I fixed this issue by adding the following lines to the location:

    ### When set (e.g., to ""), disables output fastcgiwrap buffering.
    ### MUST be set if SSE is used
    fastcgi_param NO_BUFFERING "";

    ### When buffering is disabled, the response is passed to a
    ### client synchronously, immediately as it is received.
    ### Nginx will not try to read the whole response from the
    ### FastCGI server.
    ### MUST be set if SSE is used
    fastcgi_buffering off;

> Your other problem is likely also due to fastcgiwrap, but this time incorrect configuration: by default it only provides one process, so any in-progress request, such as a server-sent events connection, will block all other requests from being handled. You have to configure more fcgiwrap processes using the "-c" command line argument to handle multiple parallel requests.

You were right here. I had to provide the "-c4" parameter, for example, to the fcgiwrap process. But I could not find an easy way to set this parameter: I had to hack /etc/init.d/fcgiwrap, change the DAEMON_OPTS variable, and force the script to stop and then start the service again with the new parameters when I type:

    sudo /etc/init.d/fcgiwrap reload

Do you know any better way to provide this -c4 parameter to fcgiwrap? Or do I have to create my own fcgiwrap script just to enable handling of multiple parallel requests?

Thanks

> --
> Maxim Dounin
> http://mdounin.ru/

--
Anatoliy Malishevskiy
YumaWorks, Inc.

From nginx-forum at forum.nginx.org Fri May 7 09:18:46 2021
From: nginx-forum at forum.nginx.org (spaace)
Date: Fri, 07 May 2021 05:18:46 -0400
Subject: Hardening & security
Message-ID:

Hi,

We intend to deploy Nginx as a reverse proxy and want to be sure it is as secure as possible.

Are there any recommended scanners to check whether the rules have any holes in them? E.g. Acunetix?

Which is the de facto hardening guide for securing Nginx rules apart from the CIS published ones?
Rgds
Arun

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291451,291451#msg-291451

From maxim at nginx.com Fri May 7 09:34:53 2021
From: maxim at nginx.com (Maxim Konovalov)
Date: Fri, 7 May 2021 12:34:53 +0300
Subject: Hardening & security
In-Reply-To:
References:
Message-ID:

Hi,

On 07.05.2021 12:18, spaace wrote:
> Hi,
>
> We intend to deploy Nginx as a reverse proxy and want to be sure it is as secure as possible.
>
> Are there any recommended scanners to check whether the rules have any holes in them? E.g. Acunetix?
>
> Which is the de facto hardening guide for securing Nginx rules apart from the CIS published ones?

I'd additionally take a look at Yandex's Gixy, an nginx config scanner: https://github.com/yandex/gixy

--
Maxim Konovalov

From francis at daoine.org Fri May 7 13:02:05 2021
From: francis at daoine.org (Francis Daly)
Date: Fri, 7 May 2021 14:02:05 +0100
Subject: Rewrite/redirect issue
In-Reply-To:
References: <5860cf1639e94399b6c1181ea078e955@dmp.com>
Message-ID: <20210507130205.GC6772@daoine.org>

On Thu, May 06, 2021 at 07:15:55PM +0000, Eric Speake wrote:

Hi there,

> Looking at the logs I see this:
>
> 192.168.250.30 - - [06/May/2021:13:27:27 -0500] "GET /summersummit HTTP/1.1" 301 185 "-"
>
> While seeing the 301 it's not redirecting anywhere.

What response do you get from something like

curl -ki https://mydomain.com/summersummit

?

If nginx is sending the correct HTTP 301 Location:, then nginx is configured correctly. If not, then it is not.

Using "curl", or something like it, avoids any caching or cleverness that a "full" browser might be doing, and is much easier for testing.

Cheers,

f
--
Francis Daly        francis at daoine.org

From francis at daoine.org Fri May 7 13:07:37 2021
From: francis at daoine.org (Francis Daly)
Date: Fri, 7 May 2021 14:07:37 +0100
Subject: Nginx - How to only allow image requests
In-Reply-To:
References:
Message-ID: <20210507130737.GD6772@daoine.org>

On Thu, May 06, 2021 at 01:45:39PM -0400, amenjava wrote:

Hi there,

> I am setting up a proxy that accepts only image requests and sends them to the upstream (internet).

What's an image request?

As in: if you are given one specific request, how do you know whether or not you consider it to be an image request?

If you can answer that, then the matching nginx config might be clearer.

Good luck with it,

f
--
Francis Daly        francis at daoine.org

From francis at daoine.org Fri May 7 14:20:23 2021
From: francis at daoine.org (Francis Daly)
Date: Fri, 7 May 2021 15:20:23 +0100
Subject: any datagroup type solution for nginx
In-Reply-To:
References:
Message-ID: <20210507142023.GE6772@daoine.org>

On Tue, Apr 27, 2021 at 02:49:04PM +0300, Oğuz Yarımtepe wrote:

Hi there,

> I have a large list of paths going to the same backend:
>
> location ~ /xxx { proxy_pass http://backend_foo; }
> location ~ /yyy { proxy_pass http://backend_foo; }
> location ~ /zzz { proxy_pass http://backend_foo; }
> location ~ /mmm { proxy_pass http://backend_foo; }
> ....
>
> The number of paths is 3254. So writing them line by line doesn't look so nice to me. Any other more performant solution? Maybe with lua and an in-memory solution?

Are you concerned about writing 3000 lines, or with nginx being able to handle a config with 3000 lines?

If it is "writing", then you can use your favourite templating or macro-processing language to turn your input into the expected output.

If it is "handling", then I'd suggest that if you don't measure a performance problem, there is not a performance problem that you care about.
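To illustrate the templating route: a few lines of shell can expand a list of paths into the repetitive location blocks (a sketch; the paths.txt input file, one path per line, and the output file name are hypothetical):

    while read -r p; do
        printf 'location ~ %s { proxy_pass http://backend_foo; }\n' "$p"
    done < paths.txt > /etc/nginx/conf.d/generated_paths.conf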
If the paths are really path-prefixes, then using

location ^~ /xxx {}

instead of the "~" regex marker will probably be faster to process. And if they are all regex, and you measure a problem, then you might be able to rearrange the order, or group some into fewer, bigger regexes, to see if that helps the problem.

But unless you have an error message or a measured performance problem, it is probably worth sticking with the "naïve" approach.

Cheers,

f
--
Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org Fri May 7 19:31:59 2021
From: nginx-forum at forum.nginx.org (daniel.senior)
Date: Fri, 07 May 2021 15:31:59 -0400
Subject: Upstream Connectivity Availability Tweaks and questions
Message-ID: <3ed9fcde73de5eef2779b7b77e8333a1.NginxMailingListEnglish@forum.nginx.org>

We are having an issue during high traffic: we get 502 errors in the log, along with "no live upstreams while connecting to upstream". In my configs it is set to 5 max fails, with the default of 10s fail time.

1. Does it try the same session within those 10s--every 2 seconds? If I made it 20 seconds would it try every 4 seconds?
2. If another session fails, is it cumulative in that 10s or is it for each session?
3. If the server is marked unavailable, what about the current sessions?
4. What would be a better solution? Extending the fail time? Lessening the fail time and fewer attempts?

Thank you

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291458,291458#msg-291458

From krikkiteer at gmail.com Sat May 8 08:01:19 2021
From: krikkiteer at gmail.com (Charlie Kilo)
Date: Sat, 8 May 2021 10:01:19 +0200
Subject: upgrading binary failed - execve - too long argument list
In-Reply-To:
References:
Message-ID:

Thanks Maxim for further explaining!

We do listen to a huge number of IPs with tons of traffic and huge spikes on them. We really need to avoid any type of congestion, therefore the reuseport.

While many of the ip:port combos are simply there for failover purposes and actually aren't "in use", I see right now no feasible way to reduce the number of listening sockets before the upgrade and restore them afterwards. That would be hugely complex, error-prone and would still leave us with a window where the instant failover wouldn't work as expected.

As we run a custom kernel anyway, I'll still try to look into adjusting MAX_ARG_STRLEN as a workaround. Having a solution that wouldn't make this necessary would be simply a blast though and greatly appreciated :-)

On Fri, Apr 30, 2021 at 11:14 AM Charlie Kilo wrote:

> correction.. if i count with
> ss -l -t |grep http | wc -l
> we have around 58340 listening sockets.. at least on that machine..
>
> On Fri, Apr 30, 2021 at 8:27 AM Charlie Kilo wrote:
>
>> Thanks a lot for the hints so far.. to provide the further info and answer the questions..
>>
>> getconf ARG_MAX shows 2097152
>> ulimit -s shows 8192
>> setting it to unlimited, doesn't change anything (also not with prlimit)
>> wc -c /proc//environ shows 1949
>>
>> it seems on a regular machine we have around 1200 listening sockets and do indeed use "reuseport"
>>
>> nginx -V shows
>> nginx version: nginx/1.18.0
>> built with OpenSSL 1.1.1j 16 Feb 2021
>> TLS SNI support enabled
>> configure arguments: --with-ipv6 --with-file-aio --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_stub_status_module --with-http_gzip_static_module --with-http_sub_module --without-http_scgi_module --with-stream --with-stream_ssl_module --with-stream_realip_module --with-stream_geoip_module --with-stream_ssl_preread_module --with-http_slice_module --add-module=../lua-nginx-module-0.10.15 --add-module=../headers-more-v0.25 --add-module=../ngx-devel-kit-master --add-module=../dist/nginx-rtmp-module --add-module=../build/nginx_upstream_check_module --conf-path=/etc/nginx.conf --with-mail --with-mail_ssl_module --without-mail_pop3_module --with-cc-opt='-D_FORTIFY_SOURCE=1 -fstack-protector -fstack-clash-protection -pipe -march=westmere -mtune=intel -O3 -I/build/nginx/boringssl/.compat/include -g' --with-ld-opt='-Wl,-z,relro,-z,now,-lgmp -ldl'
>>
>> On Tue, Apr 27, 2021 at 12:35 PM Charlie Kilo wrote:
>>
>>> Hi everyone,
>>> i'm trying to upgrade an nginx-binary while running.
>>> When i do kill -s USR2 , i get the following error in the logs..
>>>
>>> 11:40:38 [alert] 52701#0: execve() failed while executing new binary process "/opt/sbin/nginx" (7: Argument list too long)
>>>
>>> Anybody knows what exactly is in those arguments ? We have ~ 20-55 worker processes if that might be related..
>>> nginx-version is 1.18.0, os: debian buster
>>>
>>> thanks everyone in advance!
>>> chris

From r at roze.lv Sat May 8 16:33:07 2021
From: r at roze.lv (Reinis Rozitis)
Date: Sat, 8 May 2021 19:33:07 +0300
Subject: $ssl_protocol in nodejs POST requests after 1.19.4
Message-ID: <002d01d74427$d3d3ded0$7b7b9c70$@roze.lv>

Hello.

I have a strange issue where, for a POST request having any form data, Nginx after version 1.19.4 doesn't log the $ssl_protocol (or any other $ssl_*) variable.

I have configured a custom access log:

log_format main '... $ssl_protocol $ssl_cipher $server_port';

A simple script (for example from https://nodejs.dev/learn/make-an-http-post-request-using-nodejs) will generate the following access log entry with all the variables being empty:

[08/May/2021:19:11:50 +0300] ... "axios/0.21.1" - - 443

The moment you remove the form data everything is being logged:

[08/May/2021:19:10:58 +0300] ... HTTP/1.1" 200 772 "-" "axios/0.21.1" TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256 443

I tried to debug the requests and see the difference (besides Content-Length) but I wasn't able to pinpoint the issue. I tried various nodejs libraries ('request' etc) and also a native approach - all produce the same results - empty POST requests are fine; the moment you post any form data the $ssl_* become empty. I can't reproduce it with curl - I tried various requests with and without data, with chunked encoding etc - everything is being logged as expected.

One could say that the problem is on the Node side - but what has changed between 1.19.4 and 1.19.5 that breaks the logging?

Maybe someone has any suggestions or ideas on how to investigate this further?
Wbr

From mdounin at mdounin.ru Sat May 8 17:29:28 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sat, 8 May 2021 20:29:28 +0300
Subject: $ssl_protocol in nodejs POST requests after 1.19.4
In-Reply-To: <002d01d74427$d3d3ded0$7b7b9c70$@roze.lv>
References: <002d01d74427$d3d3ded0$7b7b9c70$@roze.lv>
Message-ID:

Hello!

On Sat, May 08, 2021 at 07:33:07PM +0300, Reinis Rozitis wrote:

> Hello.
> I have a strange issue where, for a POST request having any form data, Nginx after version 1.19.4 doesn't log the $ssl_protocol (or any other $ssl_*) variable.
>
> I have configured a custom access log:
>
> log_format main '... $ssl_protocol $ssl_cipher $server_port';
>
> A simple script (for example from https://nodejs.dev/learn/make-an-http-post-request-using-nodejs) will generate the following access log entry with all the variables being empty:
>
> [08/May/2021:19:11:50 +0300] ... "axios/0.21.1" - - 443
>
> The moment you remove the form data everything is being logged:
>
> [08/May/2021:19:10:58 +0300] ... HTTP/1.1" 200 772 "-" "axios/0.21.1" TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256 443
>
> I tried to debug the requests and see the difference (besides Content-Length) but I wasn't able to pinpoint the issue. I tried various nodejs libraries ('request' etc) and also a native approach - all produce the same results - empty POST requests are fine; the moment you post any form data the $ssl_* become empty. I can't reproduce it with curl - I tried various requests with and without data, with chunked encoding etc - everything is being logged as expected.
>
> One could say that the problem is on the Node side - but what has changed between 1.19.4 and 1.19.5 that breaks the logging?
>
> Maybe someone has any suggestions or ideas on how to investigate this further?

Thanks for the report, it looks like this change broke things:

changeset: 7738:554c6ae25ffc
user:      Ruslan Ermilov
date:      Fri Nov 06 23:44:54 2020 +0300
summary:   SSL: fixed non-working SSL shutdown on lingering close.

If the connection is being closed with lingering close, nginx now shuts down the SSL connection before lingering close, and this happens before the request is logged. As a result $ssl_* variables are not available during request logging. An easy way to reproduce this with arbitrary requests is to use "lingering_close always;".

The only fix I can think of is to rewrite the lingering close so it will happen after the request is logged.

--
Maxim Dounin
http://mdounin.ru/

From r at roze.lv Sat May 8 18:16:39 2021
From: r at roze.lv (Reinis Rozitis)
Date: Sat, 8 May 2021 21:16:39 +0300
Subject: $ssl_protocol in nodejs POST requests after 1.19.4
In-Reply-To:
References: <002d01d74427$d3d3ded0$7b7b9c70$@roze.lv>
Message-ID: <002e01d74436$4a653b30$df2fb190$@roze.lv>

> Thanks for the report, it looks like this change broke things:
>
> changeset: 7738:554c6ae25ffc
>
> The only fix I can think of is to rewrite the lingering close so it will happen after the request is logged.

Thkx Maxim for finding the cause.

I suppose that this is considered a bug then? If so do I need to create a request on Trac or will this be enough to be fixed at some point?

wbr
rr

From mdounin at mdounin.ru Sat May 8 19:45:42 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sat, 8 May 2021 22:45:42 +0300
Subject: $ssl_protocol in nodejs POST requests after 1.19.4
In-Reply-To: <002e01d74436$4a653b30$df2fb190$@roze.lv>
References: <002d01d74427$d3d3ded0$7b7b9c70$@roze.lv> <002e01d74436$4a653b30$df2fb190$@roze.lv>
Message-ID:

Hello!
On Sat, May 08, 2021 at 09:16:39PM +0300, Reinis Rozitis wrote:

> > Thanks for the report, it looks like this change broke things:
> >
> > changeset: 7738:554c6ae25ffc
> >
> > The only fix I can think of is to rewrite the lingering close so it will happen after the request is logged.
>
> Thkx Maxim for finding the cause.
> I suppose that this is considered a bug then? If so do I need to create a request on Trac or will this be enough to be fixed at some point?

Sure, it's certainly a bug and will be fixed, hopefully soon. There should be no need to create a ticket.

--
Maxim Dounin
http://mdounin.ru/

From mdounin at mdounin.ru Sun May 9 15:26:55 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sun, 9 May 2021 18:26:55 +0300
Subject: upgrading binary failed - execve - too long argument list
In-Reply-To:
References:
Message-ID:

Hello!

On Sat, May 08, 2021 at 10:01:19AM +0200, Charlie Kilo wrote:

> Thanks Maxim for further explaining!
>
> We do listen to a huge number of IPs with tons of traffic and huge spikes on them. We really need to avoid any type of congestion, therefore the reuseport.
>
> While many of the ip:port combos are simply there for failover purposes and actually aren't "in use", I see right now no feasible way to reduce the number of listening sockets before the upgrade and restore them afterwards. That would be hugely complex, error-prone and would still leave us with a window where the instant failover wouldn't work as expected.

Thanks for the details.

Note that "reuseport" doesn't help with "tons of traffic".
The only case when "reuseport" is unavoidable in nginx now is when you want to handle UDP proxying with sessions.

--
Maxim Dounin
http://mdounin.ru/

From krikkiteer at gmail.com Mon May 10 10:02:45 2021
From: krikkiteer at gmail.com (Charlie Kilo)
Date: Mon, 10 May 2021 12:02:45 +0200
Subject: upgrading binary failed - execve - too long argument list
In-Reply-To: References: Message-ID:

Hi! I have to admit, my previous "details" were somewhat vague - by "tons of traffic" I meant exactly what you described - a huge number of short-lived connections per second (some millions in a few seconds).

From nginx-forum at forum.nginx.org Mon May 10 10:53:08 2021
From: nginx-forum at forum.nginx.org (itpp2012)
Date: Mon, 10 May 2021 06:53:08 -0400
Subject: upgrading binary failed - execve - too long argument list
In-Reply-To: References: Message-ID:

You've been busy with this for 2 weeks...

It's very easy:
1. kill all nginx processes
2. replace binary
3. start nginx
4. post some vague notice about being offline for 10 seconds and apologize
5. move on with life.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291370,291474#msg-291474

From krikkiteer at gmail.com Mon May 10 18:46:43 2021
From: krikkiteer at gmail.com (Charlie Kilo)
Date: Mon, 10 May 2021 20:46:43 +0200
Subject: upgrading binary failed - execve - too long argument list
In-Reply-To: References: Message-ID:

let me quickly think about.....nope ;)

From peter.volkov at gmail.com Tue May 11 15:09:51 2021
From: peter.volkov at gmail.com (Peter Volkov)
Date: Tue, 11 May 2021 18:09:51 +0300
Subject: why nginx is not compressing reply from proxied server?
Message-ID:

Hi, I have an HTTP/1.0 web server that streams chunked content to clients. It does not gzip the content, so I would like to use nginx as a reverse proxy to compress the output from this server. Yet, no matter what I do, nginx is not compressing the results.
For nginx I'm using the following config:

server {
    gzip on;
    gzip_comp_level 5;
    gzip_http_version 1.0;
    gzip_buffers 4 32k;
    gzip_min_length 0;
    gzip_types application/octet-stream;
    gzip_proxied any;
    gzip_vary on;

    proxy_buffering off;

    location / {
        proxy_pass http://10.239.254.17:9102;
    }
}

Then I'm using curl to request content:

curl -v IP:9202 -H 'Accept-Encoding: deflate, gzip' -o output
=====================================================
> GET / HTTP/1.1
> Host: IP:9202
> User-Agent: curl/7.76.1
> Accept: */*
> Accept-Encoding: deflate, gzip
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: nginx/1.17.5
< Date: Tue, 11 May 2021 15:05:56 GMT
< Content-Type: application/octet-stream
< Transfer-Encoding: chunked
< Connection: keep-alive
< Keep-Alive: timeout=20
< Accept-Ranges: none
< Cache-Control: no-cache
<
{ [672 bytes data]
100 3343k 0 3343k 0 0 4866k 0 --:--:-- --:--:-- --:--:-- 4859k^C
=====================================================

As a result, the output is not compressed. What may cause this behaviour?

This is what the request/response to the backend server looks like:

curl -v http://10.239.254.17:9102
* Trying 10.239.254.17:9102...
* TCP_NODELAY set
* Connected to 10.239.254.17 (10.239.254.17) port 9102 (#0)
> GET / HTTP/1.1
> Host: 10.239.254.17:9102
> User-Agent: curl/7.66.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Server: UDP to HTTP tool with smart buffering
< Accept-Ranges: none
< Content-type: application/octet-stream
< Cache-Control: no-cache

Thanks in advance for any help,
--
Peter.

From peter.volkov at gmail.com Tue May 11 15:16:46 2021
From: peter.volkov at gmail.com (Peter Volkov)
Date: Tue, 11 May 2021 18:16:46 +0300
Subject: why nginx is not compressing reply from proxied server?
In-Reply-To: References: Message-ID:

Err, after a few hours of debugging, and writing this email, I've realised that I have `gzip off;` in the http {} block of the configuration. After enabling gzip in the http block everything works fine. Is it correct behaviour that no warning is issued for such a configuration?

--
Peter.

From nginx-forum at forum.nginx.org Tue May 11 20:17:24 2021
From: nginx-forum at forum.nginx.org (rsavignon)
Date: Tue, 11 May 2021 16:17:24 -0400
Subject: auth_request not passing query string
Message-ID: <9537361e31179b80f920157ce5f01563.NginxMailingListEnglish@forum.nginx.org>

Hi All, I'm trying to configure an auth-based static file server using nginx, but it's not working because, I suppose, the $request_uri is not being forwarded to the auth endpoint. Does auth_request support relaying $request_uri? If someone could give a tip I would be very grateful.

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    ssl_certificate ....;
    ssl_certificate_key ....;
    root /path/to/static/html;
    index index.html index.htm index.nginx-debian.html;
    server_name ...;

    location / {
        auth_request /validate;
        auth_request_set $auth_status $upstream_status;
        try_files $uri $uri/ =404;
    }

    location /validate {
        proxy_pass https://validation.mysite.com/validate/;
        proxy_ssl_certificate ...;
        proxy_ssl_certificate_key ...;
        proxy_pass_request_headers on;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Original-URI $request_uri;
    }
}

Thanks in advance.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291486,291486#msg-291486

From mdounin at mdounin.ru Tue May 11 21:23:17 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 12 May 2021 00:23:17 +0300
Subject: why nginx is not compressing reply from proxied server?
In-Reply-To: References: Message-ID:

Hello!
On Tue, May 11, 2021 at 06:16:46PM +0300, Peter Volkov wrote:

> Err, after a few hours of debugging, and writing this email, I've realised
> that I have `gzip off;` in the http {} block of the configuration. After
> enabling gzip in the http block everything works fine. Is it correct
> behaviour that no warning is issued for such a configuration?

The configuration specified in the http{} block applies unless you've redefined the configuration in a more specific scope, such as server{} or location{}. That is, the following configuration won't use gzip:

http {
    gzip off;

    server {
        listen 80;
    }
}

But the following will:

http {
    gzip off;

    server {
        listen 80;
        gzip on;
    }
}

That's how it is expected to work. If a configuration with "gzip off;" at the http level and "gzip on;" at the server level doesn't work for you, but changing it to "gzip on;" at the http level works, most likely this means that you are looking at the wrong server{} block. Note that the configuration you've provided listens on port 80 (the default):

> server {
>     gzip on;
>     gzip_comp_level 5;
>     gzip_http_version 1.0;
>     gzip_buffers 4 32k;
>     gzip_min_length 0;
>     gzip_types application/octet-stream;
>     gzip_proxied any;
>     gzip_vary on;
>
>     proxy_buffering off;
>
>     location / {
>         proxy_pass http://10.239.254.17:9102;
>     }
> }

And you are testing port 9202, not 80:

> curl -v IP:9202 -H 'Accept-Encoding: deflate, gzip' -o output

Most likely this means that you are testing against the wrong server{} block, and that's why "gzip on;" at the http{} level works for you.

--
Maxim Dounin
http://mdounin.ru/

From francis at daoine.org Tue May 11 21:30:44 2021
From: francis at daoine.org (Francis Daly)
Date: Tue, 11 May 2021 22:30:44 +0100
Subject: auth_request not passing query string
In-Reply-To: <9537361e31179b80f920157ce5f01563.NginxMailingListEnglish@forum.nginx.org>
References: <9537361e31179b80f920157ce5f01563.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20210511213044.GF6772@daoine.org>

On Tue, May 11, 2021 at 04:17:24PM -0400, rsavignon wrote:

Hi there,

> Hi All, I'm trying to configure an auth-based static file server using
> nginx, but it's not working because, I suppose, the $request_uri is not
> being forwarded to the auth endpoint. Does auth_request support relaying
> $request_uri? If someone could give a tip I would be very grateful.

What does the server at https://validation.mysite.com/validate/ see in the X-Original-URI header that it receives?

> location /validate {
>     proxy_pass https://validation.mysite.com/validate/;
...
>     proxy_set_header X-Original-URI $request_uri;
> }

And if that is not what it wants to see: what does it want to see instead?

Cheers,

f
--
Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org Tue May 11 23:05:55 2021
From: nginx-forum at forum.nginx.org (rsavignon)
Date: Tue, 11 May 2021 19:05:55 -0400
Subject: auth_request not passing query string
In-Reply-To: <20210511213044.GF6772@daoine.org>
References: <20210511213044.GF6772@daoine.org>
Message-ID: <4fbcf6f5a02e69a789ce438c96c68d6c.NginxMailingListEnglish@forum.nginx.org>

Hi Francis, thanks for the fast reply. At the auth server, the value of the "X-Original-URI" header is only "/validate". But it should be "/validate?token=", as all requests for a static file pass a token query param to the file server (https://files.mysite.com/myvideo.mp4?token=).

Cheers, Rafael.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291486,291489#msg-291489

From francis at daoine.org Tue May 11 23:38:49 2021
From: francis at daoine.org (Francis Daly)
Date: Wed, 12 May 2021 00:38:49 +0100
Subject: auth_request not passing query string
In-Reply-To: <4fbcf6f5a02e69a789ce438c96c68d6c.NginxMailingListEnglish@forum.nginx.org>
References: <20210511213044.GF6772@daoine.org> <4fbcf6f5a02e69a789ce438c96c68d6c.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20210511233849.GG6772@daoine.org>

On Tue, May 11, 2021 at 07:05:55PM -0400, rsavignon wrote:

Hi there,

> thanks for the fast reply. At the auth server, the value of the
> "X-Original-URI" header is only "/validate". But it should be
> "/validate?token=", as all requests for a static file pass a token query
> param to the file server (https://files.mysite.com/myvideo.mp4?token=).

I would expect the header to be /myvideo.mp4?token=, and not to be /validate at all.

It seems to work for me, at least using http so that I can see the traffic using tcpdump. (Alternatively, you could add $http_x_original_uri to the "upstream" logging, to see what is received as far as the web service is concerned.)

Config:
==
server {
    listen 6990;

    location / {
        auth_request /validate;
        auth_request_set $auth_status $upstream_status;
        try_files $uri $uri/ =404;
    }

    location /validate {
        proxy_pass http://127.0.0.1:6991/validate/;
        proxy_set_header X-Original-URI $request_uri;
    }
}

server {
    listen 6991;

    location / {
        return 200 "request - header: $request_uri - $http_x_original_uri\n";
    }
}
==

Test:
==
sudo tcpdump -nn -i any -A -s 0 port 6991 &
==
curl -i http://127.0.0.1:6990/file.mp4?token=value
==

The tcpdump output shows the request from the 6990 server to the 6991 server including

==
GET /validate/ HTTP/1.0
X-Original-URI: /file.mp4?token=value
Host: 127.0.0.1:6991
Connection: close
==

and it shows the response body from the 6991 server including

==
request - header: /validate/ - /file.mp4?token=value
==

Does a test like that show something different for you?

Thanks,

f
--
Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org Wed May 12 13:55:15 2021
From: nginx-forum at forum.nginx.org (rsavignon)
Date: Wed, 12 May 2021 09:55:15 -0400
Subject: auth_request not passing query string
In-Reply-To: <20210511233849.GG6772@daoine.org>
References: <20210511233849.GG6772@daoine.org>
Message-ID: <03473a7d653489390e5624a32cd6a18a.NginxMailingListEnglish@forum.nginx.org>

Hi, I could reproduce your test and it works as expected.
For the sake of logging, I replaced validate with my own validate block, and the issue persists. The validate server log output:

{"connection":"upgrade","host":"localhost","x-original-uri":"/validate","user-agent":"curl/7.68.0","accept":"*/*"}

The one thing I could figure out is that if the validate server is on the same host it works, but if it is on another host it doesn't.

Thanks.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291486,291492#msg-291492

From francis at daoine.org Wed May 12 14:20:30 2021
From: francis at daoine.org (Francis Daly)
Date: Wed, 12 May 2021 15:20:30 +0100
Subject: auth_request not passing query string
In-Reply-To: <03473a7d653489390e5624a32cd6a18a.NginxMailingListEnglish@forum.nginx.org>
References: <20210511233849.GG6772@daoine.org> <03473a7d653489390e5624a32cd6a18a.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20210512142030.GH6772@daoine.org>

On Wed, May 12, 2021 at 09:55:15AM -0400, rsavignon wrote:

Hi there,

> I could reproduce your test and it works as expected. For the sake of
> logging, I replaced validate with my own validate block, and the issue
> persists. The validate server log output:
>
> {"connection":"upgrade","host":"localhost","x-original-uri":"/validate","user-agent":"curl/7.68.0","accept":"*/*"}

Why does that say host:localhost?

Is there any chance that something else in your system is using X-Original-URI as well, and overwriting the header value you set? Perhaps use "X-Testing $request_uri" in the nginx config and see if that changes anything?

> The one thing I could figure out is that if the validate server is on the
> same host it works, but if it is on another host it doesn't.

That seems unlikely to me. Testing shows that it still works for me.

So, if you still see the problem, can you show a complete nginx config and sample request that will let someone else see the same problem?

Good luck with it,

f
--
Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org Thu May 13 15:25:13 2021
From: nginx-forum at forum.nginx.org (spaace)
Date: Thu, 13 May 2021 11:25:13 -0400
Subject: Hardening & security
In-Reply-To: References: Message-ID: <86c5ca23a3e3c7d5f6f7c9e85a484bf1.NginxMailingListEnglish@forum.nginx.org>

Thanks Maxim. I saw this tool too, but I was not sure if it has a good breadth of coverage. Their GitHub readme seems to list a few vulnerabilities, and I was thinking perhaps that could be inadequate.

Thank you.
Arun

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291451,291510#msg-291510

From lucas at lucasrolff.com Sun May 16 16:46:17 2021
From: lucas at lucasrolff.com (Lucas Rolff)
Date: Sun, 16 May 2021 16:46:17 +0000
Subject: Memory usage in nginx proxy setup and use of min_uses
In-Reply-To: References: Message-ID:

Hi everyone,

I have a few questions regarding proxy_cache and the use of proxy_cache_min_uses in nginx:

Let's assume you have an nginx server with proxy_cache enabled, and you've set proxy_cache_min_uses to 5;

Q1: How does nginx internally keep track of the count for min_uses? Is it using SHM to do it (and does it count towards the keys_zone limit?), or something else?

Q2: How long does nginx keep this information about the number of accesses? Let's say the file gets visited once in a 24-hour period: would nginx keep the counter at 1 for that whole period, or is there some set timeout after which it is "flushed"?

Q3: If you have a user who decides to access files with a random query string on them: we want to prevent this caching from filling up the storage (the main reason for setting proxy_cache_min_uses in the first place) - but are we going to fill up the memory (and the keys_zone limit) regardless? If yes - is there a way to prevent this?

Basically, the goal is to understand even just broadly how min_uses are counted, and possibly how to prevent memory from being eaten up in case someone decides to access the same URL once with millions of requests - if there's any way to flush out the memory, for example, for anything that hasn't yet reached proxy_cache_min_uses, if it indeed uses up memory.

Best Regards,
Lucas Rolff

From mdounin at mdounin.ru Mon May 17 14:37:38 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 17 May 2021 17:37:38 +0300
Subject: Memory usage in nginx proxy setup and use of min_uses
In-Reply-To: References: Message-ID:

Hello!
On Sun, May 16, 2021 at 04:46:17PM +0000, Lucas Rolff wrote:

> Hi everyone,
>
> I have a few questions regarding proxy_cache and the use of
> proxy_cache_min_uses in nginx:
>
> Let's assume you have an nginx server with proxy_cache enabled,
> and you've set proxy_cache_min_uses to 5;
>
> Q1: How does nginx internally keep track of the count for
> min_uses? Is it using SHM to do it (and does it count towards
> the keys_zone limit?), or something else?
>
> Q2: How long does nginx keep this information about the number
> of accesses? Let's say the file gets visited once in a 24-hour
> period: would nginx keep the counter at 1 for that whole period,
> or is there some set timeout after which it is "flushed"?
>
> Q3: If you have a user who decides to access files with a random
> query string on them: we want to prevent this caching from
> filling up the storage (the main reason for setting
> proxy_cache_min_uses in the first place) - but are we going to
> fill up the memory (and the keys_zone limit) regardless? If yes
> - is there a way to prevent this?
>
> Basically, the goal is to understand even just broadly how
> min_uses are counted, and possibly how to prevent memory from
> being eaten up in case someone decides to access the same URL
> once with millions of requests - if there's any way to flush out
> the memory, for example, for anything that hasn't yet reached
> proxy_cache_min_uses, if it indeed uses up memory.

The proxy_cache_min_uses directive basically means that if nginx sees a request whose uses count has not reached the specified limit yet, it won't try to store the response to disk. It will, however, keep the key in the keys_zone with the relevant information, notably the number of uses seen so far.

Quoting the proxy_cache_path directive description (http://nginx.org/r/proxy_cache_path):

"In addition, all active keys and information about data are stored in a shared memory zone, whose name and size are configured by the keys_zone parameter. One megabyte zone can store about 8 thousand keys."

Much like with any cache item, such keys are removed from the keys_zone if no matching requests are seen during the "inactive" time. Similarly, least recently used keys are removed if there is not enough room in the keys_zone.

Much like with normal caching, you can control the cache key nginx uses. If you don't want to take the query string into account, you may want to configure proxy_cache_key without the query string (see http://nginx.org/r/proxy_cache_key).
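For example, something along these lines (the paths and zone name are just placeholders; the default cache key is $scheme$proxy_host$request_uri):

    # ~8k keys per megabyte: a 10m keys_zone holds roughly 80 thousand keys
    proxy_cache_path /var/cache/nginx keys_zone=cache:10m
                     inactive=24h max_size=10g;

    # ignore the query string, so "/file?token=..." variants
    # all map to a single cache entry
    proxy_cache_key $scheme$proxy_host$uri;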
--
Maxim Dounin
http://mdounin.ru/

From lucas at lucasrolff.com Mon May 17 14:47:33 2021
From: lucas at lucasrolff.com (Lucas Rolff)
Date: Mon, 17 May 2021 14:47:33 +0000
Subject: Memory usage in nginx proxy setup and use of min_uses
In-Reply-To: References: Message-ID: <437D11E4-3881-48C6-B9EE-FF0389699F39@lucasrolff.com>

Hi Maxim,

Thanks a lot for your reply!

I'm indeed aware of the ~8k keys per MB of memory; I was just wondering if it was handled differently when min_uses is in use, but it does indeed make sense that nginx has to keep track of it somehow, and the keys_zone makes the most sense!

> Much like with any cache item, such keys are removed from the
> keys_zone if no matching requests are seen during the "inactive" time

That's a bummer, since that still allows memory "poisoning" - it would be awesome to have another flag for proxy_cache_path to control how long keys that have not yet reached min_uses are kept in SHM. The benefit of this would be to say that if min_uses has not been reached within, let's say, 5 minutes, then we purge those keys from SHM to clear up the memory.
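Something like this, with a hypothetical syntax (the min_uses_inactive parameter below does not exist in nginx; it is only meant to illustrate the idea):

    # hypothetical flag: drop keys that have not reached min_uses
    # within 5 minutes, independently of the normal "inactive" timer
    proxy_cache_path /var/cache/nginx keys_zone=cache:10m inactive=24h
                     min_uses_inactive=5m;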
For controlling the cache items: ideally we want to use query strings as part of the cache key, but still prevent memory "poisoning" as above - the inactive flag for min_uses would be pretty useful for this. While it won't prevent it fully, we'd still be able to somewhat control memory even if people are trying to do cache/memory poisoning.

Best Regards,
Lucas Rolff

From mdounin at mdounin.ru Mon May 17 19:05:59 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 17 May 2021 22:05:59 +0300
Subject: Memory usage in nginx proxy setup and use of min_uses
In-Reply-To: <437D11E4-3881-48C6-B9EE-FF0389699F39@lucasrolff.com>
References: <437D11E4-3881-48C6-B9EE-FF0389699F39@lucasrolff.com>
Message-ID:

Hello!

On Mon, May 17, 2021 at 02:47:33PM +0000, Lucas Rolff wrote:

> Hi Maxim,
>
> Thanks a lot for your reply!
>
> I'm indeed aware of the ~8k keys per MB of memory; I was just wondering
> if it was handled differently when min_uses is in use, but it does
> indeed make sense that nginx has to keep track of it somehow, and the
> keys_zone makes the most sense!
>
> > Much like with any cache item, such keys are removed from the
> > keys_zone if no matching requests are seen during the "inactive" time
>
> That's a bummer, since that still allows memory "poisoning" - it would
> be awesome to have another flag for proxy_cache_path to control how
> long keys that have not yet reached min_uses are kept in SHM. The
> benefit of this would be to say that if min_uses has not been reached
> within, let's say, 5 minutes, then we purge those keys from SHM to
> clear up the memory.
>
> For controlling the cache items: ideally we want to use query strings
> as part of the cache key, but still prevent memory "poisoning" as
> above - the inactive flag for min_uses would be pretty useful for
> this. While it won't prevent it fully, we'd still be able to somewhat
> control memory even if people are trying to do cache/memory poisoning.

In no particular order:

- The attack you are considering is not about "poisoning". At most, it can be used to make the cache less efficient.

- The goal "to somewhat control memory" looks confusing: the memory used by caching is hard-limited by the keys_zone size, and it is not possible to use more memory than configured. At most, you can try to limit the number of keys an attacker will be able to put into the keys_zone. But using a separate inactive timer for keys that have not reached min_uses won't help here: an attacker who is able to do an arbitrary number of requests will be able to flush all cache items anyway.

- Using proxy_cache_min_uses cannot help here, regardless of how it is handled, since nothing stops the attacker from requesting the same resource multiple times.

In general, I see two basic options to handle things if you don't want someone to be able to reduce your cache efficiency:

1. Strictly limit which resources are to be cached, in particular by using an appropriate proxy_cache_key, as already suggested.

2. Limit the maximum number of requests an attacker can do, so they won't be able to cause noticeable degradation of cache efficiency. In particular, this can be done with limit_req (http://nginx.org/r/limit_req).

Also, it is always a good idea to make sure your site works fine without caching at all.

--
Maxim Dounin
http://mdounin.ru/

From lucas at lucasrolff.com Mon May 17 19:33:43 2021
From: lucas at lucasrolff.com (Lucas Rolff)
Date: Mon, 17 May 2021 19:33:43 +0000
Subject: Memory usage in nginx proxy setup and use of min_uses
In-Reply-To: References: <437D11E4-3881-48C6-B9EE-FF0389699F39@lucasrolff.com>
Message-ID: <41F6E760-5E8D-42BE-801B-8FB853A53A59@lucasrolff.com>

Hi Maxim!

> - The attack you are considering is not about "poisoning". At most,
> it can be used to make the cache less efficient.

Poisoning is probably the wrong word indeed. Since nginx doesn't really handle reaching the limit of the keys_zone, it simply starts to return a 500 internal server error. So I don't think it's making the cache less efficient (other than you won't be able to cache that much); you end up breaking nginx, because when the keys_zone limit has been reached, nginx simply starts returning 500 internal server error for items that are not already in the proxy_cache. If it did an LRU/LFU on the keys, then yes, you could probably end up with a less efficient cache.

But as it stands currently, if one uses $request_uri, an attacker could reach the keys_zone limit and break all traffic that is not yet cached. Even if one would not use $request_uri but some specific argument, where a query string wouldn't directly affect the output, it could cause the same behavior.

An application where avoiding this is hard is a CDN, for example: while ideally one would not use $request_uri as the cache key, it's sometimes required by customer applications.

> At most, you can try to limit the number of keys an attacker will be
> able to put into the keys_zone

If we take the example of a CDN or any kind of reverse proxy: what impact would it have to use a proxy_cache_path for each domain? Let's say we're talking 10000 domains on a single nginx server. Normally one would share one or a couple (fast/slow storage, for example), and the domain would be a part of the cache key. If we want to limit per domain, it would require a proxy_cache_path per domain - which surely would be very flexible, but I also think you're then asking nginx to do a lot more management.

> But using a separate inactive timer for keys that have not reached
> min_uses won't help here: an attacker who is able to do an arbitrary
> number of requests will be able to flush all cache items anyway.

Unless nginx has very recently implemented purging of old cache entries when the keys_zone limit is reached, then no: it would still break nginx for non-cached requests (returning 500 internal server error). If nginx has started to purge old things when the limit is reached, then sure, the attacker would still be able to wipe out the cache.

But let's say we have "inactive" set to 24+ hours (which is often used for static files). In an attack where someone appends random query strings, those keys would first be removed after 24 hours (or more, depending on the limit). With a separate flag, one could set this counter to something like 60 seconds (i.e. delete the key from memory if it hasn't reached its min_uses within 60 seconds) - this way, you're still rotating those keys out *a lot* faster.

> In particular, this can be done with limit_req

If we'd limit this to 20 req/s, this would allow a single IP to use up 1.78 million keys in the keys_zone if "inactive" is 24 hours; do this with 10 IPs, and we're at 17.8 million.

If we'd flush the keys not reaching min_uses after 1 minute, we'd limit the keys in the keys_zone per IP to 1200. The attacker can surely keep doing his 20 requests per second, but since we're throwing things out pretty quickly, we've decreased the "damage" a user can do from 1.78 million keys down to 1200 keys, or even 12000 keys if we'd keep them for 10 minutes.

I still think such a feature would be awesome, since it would allow better control (and play nicely with the proxy_cache_min_uses directive); proxy_cache_min_uses is often used to prevent excessive storage use when there are not enough hits. Being able to do the same with the keys_zone data would, I think, benefit quite a lot, since it would solve (or at least help mitigate) the scenario above. It makes things just a tad harder for someone trying to cause trouble.

Best Regards,
Lucas Rolff

From mdounin at mdounin.ru Tue May 18 01:27:02 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 18 May 2021 04:27:02 +0300
Subject: Memory usage in nginx proxy setup and use of min_uses
In-Reply-To: <41F6E760-5E8D-42BE-801B-8FB853A53A59@lucasrolff.com>
References: <437D11E4-3881-48C6-B9EE-FF0389699F39@lucasrolff.com> <41F6E760-5E8D-42BE-801B-8FB853A53A59@lucasrolff.com>
Message-ID:

Hello!

On Mon, May 17, 2021 at 07:33:43PM +0000, Lucas Rolff wrote:

> Hi Maxim!
>
> > - The attack you are considering is not about "poisoning". At most,
> > it can be used to make the cache less efficient.
>
> Poisoning is probably the wrong word indeed. Since nginx doesn't really
> handle reaching the limit of the keys_zone, it simply starts to return
> a 500 internal server error. So I don't think it's making the cache
> less efficient (other than you won't be able to cache that much); you
> end up breaking nginx, because when the keys_zone limit has been
> reached, nginx simply starts returning 500 internal server error for
> items that are not already in the proxy_cache. If it did an LRU/LFU on
> the keys, then yes, you could probably end up with a less efficient
> cache.

While 500 is possible in some cases, especially in configurations with many worker processes and high request concurrency, even in the worst case it's expected to happen for at most half of the requests, usually much less than that. Further, the cache manager monitors the number of cache items in the keys_zone, cleaning things in advance, making 500 almost impossible in practice.

If you nevertheless observe 500 being returned in practice, this might be the actual thing to focus on.

[...]

> Unless nginx has very recently implemented purging of old cache entries
> when the keys_zone limit is reached, then no: it would still break
> nginx for non-cached requests (returning 500 internal server error).
> If nginx has started to purge old things when the limit is reached,
> then sure, the attacker would still be able to wipe out the cache.

Clearing old cache items when it is not possible to allocate a cache node dates back to the initial cache support in nginx 0.7.44[1]. And cache manager monitoring of the keys_zone and clearing it in advance dates back to nginx 1.9.13, released about five years ago[2]. Not sure either of these counts as "very recently".

> But let's say we have "inactive" set to 24+ hours (which is often used
> for static files). In an attack where someone appends random query
> strings, those keys would first be removed after 24 hours (or more,
> depending on the limit). With a separate flag, one could set this
> counter to something like 60 seconds (i.e. delete the key from memory
> if it hasn't reached its min_uses within 60 seconds) - this way,
> you're still rotating those keys out *a lot* faster.

While this may be preferable for some use cases (and sounds close to the "Segmented LRU" cache policy[3]), it certainly doesn't protect from the attack you've initially described. As previously suggested, an attacker can easily request the same resource several times, moving it to the "normal" category, so it will stay in the cache for the 24+ hours you've configured. So instead this distinction might make things worse, making it harder for actually requested resources to get into the cache.

> > In particular, this can be done with limit_req
>
> If we'd limit this to 20 req/s, this would allow a single IP to use up
> 1.78 million keys in the keys_zone if "inactive" is 24 hours; do this
> with 10 IPs, and we're at 17.8 million.

The basic idea of the burst-based limiting the limit_req module implements is that you don't need to set high rates for IP addresses. Rather, you have to configure something you expect to be seen on average per hour (or even day), and allow large enough bursts. So instead of limiting to 20 r/s you can limit to 1 r/m with burst set to, say, 1000.
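For example (the zone name and sizes here are arbitrary):

    limit_req_zone $binary_remote_addr zone=perip:10m rate=1r/m;

    server {
        location / {
            # allow a burst of up to 1000 requests to be served without
            # delay, while keeping the long-term average at 1 r/m per IP
            limit_req zone=perip burst=1000 nodelay;
        }
    }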
[...]

[1] http://hg.nginx.org/nginx/rev/3a8a53c0c42f#l19.478
[2] http://hg.nginx.org/nginx/rev/c9d680b00744
[3] https://en.wikipedia.org/wiki/Cache_replacement_policies#Segmented_LRU_(SLRU)

--
Maxim Dounin
http://mdounin.ru/

From amila.kdam at gmail.com Tue May 18 01:59:20 2021
From: amila.kdam at gmail.com (Amila Gunathilaka)
Date: Tue, 18 May 2021 07:29:20 +0530
Subject: Help: Using Nginx Reverse Proxy bypass traffic in to a application running in a container
In-Reply-To: References: Message-ID:

Hello All!

I have nginx installed on my linux host, listening on http port 80, and I want to pass external traffic coming from an external load balancer (upstream server) into my nginx reverse proxy server (port 80), and pass that http traffic to my application running in a docker container (application host port 9091).

But my nginx configuration file didn't work, as it always says "405 method not allowed" error when the request passes from nginx to the external load balancer (upstream server).

Is anyone familiar with this kind of problem? My nginx configuration file is below.

http {
    server {
        listen 80 proxy_protocol;
        #listen [::]:80 proxy_protocol;
        server_name 172.25.234.105;
        set_real_ip_from 172.25.234.2;
        real_ip_header proxy_protocol;

        location / {
            proxy_pass http://127.0.0.1:9091;
            #proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $proxy_protocol_addr;
            proxy_set_header X-Forwarded-For $proxy_protocol_addr;
            proxy_cache_bypass $http_upgrade;
            auth_basic "PROMETHEUS PUSHGATEWAY Login Area";
            auth_basic_user_file /etc/nginx/.htpasswd;
        }
    }
}

--
Amila

From nginx-forum at forum.nginx.org Tue May 18 11:02:08 2021
From: nginx-forum at forum.nginx.org (hkaroly)
Date: Tue, 18 May 2021 07:02:08 -0400
Subject: How to spawn fastcgi c++ app on windows?
Message-ID: <88445089077dd6ece3a75d7d5ba882fc.NginxMailingListEnglish@forum.nginx.org>

I followed http://chriswu.me/blog/writing-hello-world-in-fcgi-with-c-plus-plus/ to create a C++ FastCGI server app together with nginx. On Linux it is working fine.

On Windows 10, however, the server process is started by spawn-fcgi, but later FCGI_Accept_r() returns with an "Unkown listenType" internal error. I have the suspicion that spawn-fcgi is broken on Windows, since the very same C++ build is working fine with Apache. In the case of Apache there is no need to use spawn-fcgi; it can spawn the FastCGI process on its own. I think spawn-fcgi is not forwarding the standard input/output and the standard error.

I used Cygwin to build spawn-fcgi on Windows.

Is there an alternative to spawn-fcgi on Windows?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291545,291545#msg-291545

From anoopalias01 at gmail.com Tue May 18 11:14:26 2021
From: anoopalias01 at gmail.com (Anoop Alias)
Date: Tue, 18 May 2021 16:44:26 +0530
Subject: net::ERR_HTTP2_SERVER_REFUSED_STREAM
Message-ID:

Hi,

Browser consoles are showing the error net::ERR_HTTP2_SERVER_REFUSED_STREAM, and resources are not loading when enabling http2 (see attached screenshot).

The error goes away when http2 is disabled.

#################################################
[root at vps ~]# nginx -V
nginx version: nginx/1.19.10
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC)
built with OpenSSL 1.1.1k 25 Mar 2021
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/etc/nginx/modules --with-pcre=./pcre-8.44 --with-pcre-jit --with-zlib=./zlib-1.2.11 --with-openssl=./openssl-1.1.1k --with-openssl-opt=enable-tls1_3 --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error_log --http-log-path=/var/log/nginx/access_log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/dev/shm/client_temp --http-proxy-temp-path=/dev/shm/proxy_temp --http-fastcgi-temp-path=/dev/shm/fastcgi_temp --http-uwsgi-temp-path=/dev/shm/uwsgi_temp --http-scgi-temp-path=/dev/shm/scgi_temp --user=nobody --group=nobody --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --with-file-aio --with-threads --with-stream --with-stream_ssl_module --with-http_slice_module --with-compat --with-http_v2_module --add-dynamic-module=/usr/local/rvm/gems/ruby-2.6.6/gems/passenger-6.0.7/src/nginx_module --add-dynamic-module=echo-nginx-module-0.61 --add-dynamic-module=headers-more-nginx-module-0.32 --add-dynamic-module=ngx_http_redis-0.3.8 --add-dynamic-module=redis2-nginx-module --add-dynamic-module=srcache-nginx-module-0.31 --add-dynamic-module=ngx_devel_kit-0.3.0 --add-dynamic-module=set-misc-nginx-module-0.31 --add-dynamic-module=ngx_http_geoip2_module --add-dynamic-module=testcookie-nginx-module --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' --with-ld-opt=-Wl,-E
######################################################

I had enabled debug logging, but it is hard for me to decipher the exact cause from the debug log.

I am using the latest nginx, so https://trac.nginx.org/nginx/ticket/2155 is ruled out as well.

Debug log -- https://autom8n.com/nginx_debug.txt

Any help is much appreciated

--
Anoop P Alias

From mdounin at mdounin.ru Tue May 18 14:00:36 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 18 May 2021 17:00:36 +0300
Subject: net::ERR_HTTP2_SERVER_REFUSED_STREAM
In-Reply-To: References: Message-ID:

Hello!
On Tue, May 18, 2021 at 04:44:26PM +0530, Anoop Alias wrote:

> Browser consoles are showing the error net::ERR_HTTP2_SERVER_REFUSED_STREAM,
> and resources are not loading when enabling http2 (see attached screenshot).
>
> The error goes away when http2 is disabled.
>
> #################################################
> [root at vps ~]# nginx -V
> nginx version: nginx/1.19.10
> built by gcc 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC)
> built with OpenSSL 1.1.1k 25 Mar 2021
> TLS SNI support enabled
> configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx

[...]

> I had enabled debug logging, but it is hard for me to decipher the exact
> cause from the debug log.
>
> I am using the latest nginx, so https://trac.nginx.org/nginx/ticket/2155 is
> ruled out as well.
>
> Debug log -- https://autom8n.com/nginx_debug.txt
>
> Any help is much appreciated

From the debug logs it looks like HTTP/2 connections are closed after the first request:

$ grep -a -E '(GOAWAY| accept:)' nginx_debug.txt
2021/05/17 22:27:30 [debug] 9555#9555: *1164 accept: 49.37.177.20:52350 fd:189
2021/05/17 22:27:31 [debug] 9555#9555: *1164 http2 send GOAWAY frame: last sid 1, error 0
2021/05/17 22:27:32 [debug] 9555#9555: *1176 accept: 49.37.177.20:52354 fd:172
2021/05/17 22:27:32 [debug] 9555#9555: *1177 accept: 49.37.177.20:52356 fd:178
2021/05/17 22:27:32 [debug] 9555#9555: *1176 http2 send GOAWAY frame: last sid 1, error 0
2021/05/17 22:27:32 [debug] 9555#9555: *1185 accept: 49.37.177.20:52362 fd:201
2021/05/17 22:27:32 [debug] 9555#9555: *1185 http2 send GOAWAY frame: last sid 1, error 0
2021/05/17 22:27:33 [debug] 9555#9555: *1191 accept: 49.37.177.20:52372 fd:204
2021/05/17 22:27:33 [debug] 9555#9555: *1191 http2 send GOAWAY frame: last sid 1, error 0
2021/05/17 22:27:41 [debug] 9555#9555: *1232 accept: 49.37.177.20:52382 fd:304
2021/05/17 22:27:42 [debug] 9555#9555: *1232 http2 send GOAWAY frame: last sid 1, error 0
2021/05/17 22:27:42 [debug] 9555#9555: *1238 accept: 49.37.177.20:52390 fd:305
2021/05/17 22:27:42 [debug] 9555#9555: *1238 http2 send GOAWAY frame: last sid 1, error 0
2021/05/17 22:27:43 [debug] 9555#9555: *1241 accept: 49.37.177.20:52404 fd:318
2021/05/17 22:27:43 [debug] 9555#9555: *1241 http2 send GOAWAY frame: last sid 1, error 0
2021/05/17 22:27:44 [debug] 9555#9555: *1250 accept: 49.37.177.20:52416 fd:305
2021/05/17 22:27:44 [debug] 9555#9555: *1250 http2 send GOAWAY frame: last sid 1, error 0
2021/05/17 22:27:55 [debug] 9555#9555: *1278 accept: 49.37.177.20:52510 fd:177
2021/05/17 22:27:55 [debug] 9555#9555: *1278 http2 send GOAWAY frame: last sid 1, error 0
2021/05/17 22:28:06 [debug] 9555#9555: *1304 accept: 49.37.177.20:52604 fd:187
2021/05/17 22:28:06 [debug] 9555#9555: *1304 http2 send GOAWAY frame: last sid 1, error 0
2021/05/17 22:28:06 [debug] 9555#9555: *1314 accept: 49.37.177.20:52602 fd:208
2021/05/17 22:28:09 [debug] 9555#9555: *1314 http2 send GOAWAY frame: last sid 0, error 0

Probably you have "keepalive_requests 1;" or "keepalive_timeout 0;" in the configuration, and this breaks things due to browsers not being able to handle GOAWAY, as explained in the ticket #2155.
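If so, restoring something close to the defaults should stop the early GOAWAY frames, e.g. (these are the default values in 1.19.10):

    # keepalive_requests 1 (or keepalive_timeout 0) makes nginx send
    # GOAWAY after the first request on an HTTP/2 connection
    keepalive_requests 1000;
    keepalive_timeout 75s;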
-- Maxim Dounin http://mdounin.ru/ From amila.kdam at gmail.com Wed May 19 02:25:24 2021 From: amila.kdam at gmail.com (Amila Gunathilaka) Date: Wed, 19 May 2021 07:55:24 +0530 Subject: nginx Digest, Vol 139, Issue 19 In-Reply-To: References: Message-ID: Hi All, Any update for my issue guys ? 2. Help: Using Nginx Reverse Proxy bypass traffic in to a application running in a container (Amila Gunathilaka) Thanks On Tue, May 18, 2021 at 4:44 PM wrote: > Send nginx mailing list submissions to > nginx at nginx.org > > To subscribe or unsubscribe via the World Wide Web, visit > http://mailman.nginx.org/mailman/listinfo/nginx > or, via email, send a message with subject or body 'help' to > nginx-request at nginx.org > > You can reach the person managing the list at > nginx-owner at nginx.org > > When replying, please edit your Subject line so it is more specific > than "Re: Contents of nginx digest..." > > > Today's Topics: > > 1. Re: Memory usage in nginx proxy setup and use of min_uses > (Maxim Dounin) > 2. Help: Using Nginx Reverse Proxy bypass traffic in to a > application running in a container (Amila Gunathilaka) > 3. How to spawn fastcgi c++ app on windows? (hkaroly) > 4. net::ERR_HTTP2_SERVER_REFUSED_STREAM (Anoop Alias) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Tue, 18 May 2021 04:27:02 +0300 > From: Maxim Dounin > To: nginx at nginx.org > Subject: Re: Memory usage in nginx proxy setup and use of min_uses > Message-ID: > Content-Type: text/plain; charset=us-ascii > > Hello! > > On Mon, May 17, 2021 at 07:33:43PM +0000, Lucas Rolff wrote: > > > Hi Maxim! > > > > > - The attack you are considering is not about "poisoning". At > > > most, it can be used to make the cache less efficient. > > > > Poisoning is probably the wrong word indeed, and since nginx > > doesn't really handle reaching the limit of keys_zone, it simply > > starts to return a 500 internal server error. So I don't think > > it's making the cache less efficient (Other than you won't be > > able to cache that much), you're ending up breaking nginx > > because when the keys_zone limit has been reached, nginx simply > > starts returning 500 internal server error for items that are > > not already in proxy_cache - if it would do an LRU/LFU on the > > keys - then yes, you could probably end up with a cache less > > efficient. > > While 500 is possible in some cases, especially in configurations > with many worker processes and high request concurrency, even in > the worst case it's expected to happen at most for half of the > requests, usually much less than that. Further, cache manager > monitors the number of cache items in the keys_zone, cleaning > things in advance, making 500 almost impossible in practice. > > If you nevertheless observe 500 being returned in practice, this > might be the actual thing to focus on. > > [...] > > > Unless nginx very recently implemented that reaching keys_zone > > limit, will start purging old cache - then no, it would still > > break the nginx for non-cached requests (returning 500 internal > > server error). If nginx has started to purge old things if the > > limit is reached, then sure the attacker would still be able to > > wipe out the cache. > > Clearing old cache items when it is not possible to allocate a > cache node dates back to initial cache support in nginx 0.7.44[1]. > And cache manager monitoring of the keys_zone and clearing it in > advance dates back to nginx 1.9.13 released about five years > ago[2]. 
Not sure any of these counts as "very recently". > > > But let's say we have an "inactive" set to 24+ hours (Which is > > often used for static files) - an attack where someone would > > append random query strings - those keys would first be removed > > after 24 hours (or higher, depending on the limit) - with a > > separate flag, one could set this counter to something like 60 > > seconds (So delete the key from memory if the key haven't > > reached it's min_uses within 60 seconds) - this way, you're > > still rotating those keys out *a lot* faster. > > While this may be preferable for some use cases (and sounds close > to the "Segmented LRU" cache policy[3]), this certainly don't > protect from the attack you've initially described. As previously > suggested, an attacker can easily request the same resource > several times, moving it to the "normal" category, so it will stay > in the cache for 24+ hours you've configured. So instead this > distinction might make things worse, making it harder for actually > requested resources to get into cache. > > > > In particular, this can be done with limit_req > > > > If we'd limit this to 20 req/s, this would allow a single IP to > > use up 1.78 million keys in the keys_zone if "inactive" is 24 > > hours - do this with 10 IPs, we're at 17.8 million. > > The basic idea of burst-based limiting the limit_req module > implements is that you don't need to set high rates for IP > addresses. Rather, you have to configure something you expect to > be seen on average per hour (or even day), and allow large enough > bursts. So instead of limiting to 20 r/s you can limit to 1 r/m > with burst set to, say, 1000. > > [...] > > [1] http://hg.nginx.org/nginx/rev/3a8a53c0c42f#l19.478 > [2] http://hg.nginx.org/nginx/rev/c9d680b00744 > [3] > https://en.wikipedia.org/wiki/Cache_replacement_policies#Segmented_LRU_(SLRU) > > -- > Maxim Dounin > http://mdounin.ru/ > > > ------------------------------ > > Message: 2 > Date: Tue, 18 May 2021 07:29:20 +0530 > From: Amila Gunathilaka > To: nginx at nginx.org, nginx-request at nginx.org > Subject: Help: Using Nginx Reverse Proxy bypass traffic in to a > application running in a container > Message-ID: > 0NegXm2WXg at mail.gmail.com> > Content-Type: text/plain; charset="utf-8" > > > > > Hello All ! > > > I have nginx installed on my linux host and* listen on http port 80* and I > want to bypass external traffic coming from external load balancer > (up-stream server) into my *nginx reverse proxy server (80 port) *and want > to bypass that http traffic into y application running in a docker > container (application host port 9091), > > But my nginx configuration file didn't work as it always says *405 method > not allowed* error when request passing from nginx into the external load > balancer (up-stream server). > > Is anyone familiar with this kind of problem? my nginx configuration file > is below. 
> > http { > server { > listen 80 proxy_protocol; > #listen [::]:80 proxy_protocol; > server_name 172.25.234.105; > set_real_ip_from 172.25.234.2; > real_ip_header proxy_protocol; > > location / { > proxy_pass http://127.0.0.1:9091; > #proxy_http_version 1.1; > proxy_set_header Upgrade $http_upgrade; > proxy_set_header Connection 'upgrade'; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $proxy_protocol_addr; > proxy_set_header X-Forwarded-For $proxy_protocol_addr; > proxy_cache_bypass $http_upgrade; > auth_basic "PROMETHEUS PUSHGATEWAY Login Area"; > auth_basic_user_file /etc/nginx/.htpasswd; > } > } > } > > -- > Amila > -------------- next part -------------- > An HTML attachment was scrubbed... > URL: < > http://mailman.nginx.org/pipermail/nginx/attachments/20210518/f4d248f5/attachment-0001.htm > > > > ------------------------------ > > Message: 3 > Date: Tue, 18 May 2021 07:02:08 -0400 > From: "hkaroly" > To: nginx at nginx.org > Subject: How to spawn fastcgi c++ app on windows? > Message-ID: > < > 88445089077dd6ece3a75d7d5ba882fc.NginxMailingListEnglish at forum.nginx.org> > > Content-Type: text/plain; charset=UTF-8 > > I followed > http://chriswu.me/blog/writing-hello-world-in-fcgi-with-c-plus-plus/ to > create a C++ fastcgi server app together with nginx. On linux is working > fine. > > On Windows 10 however the server process is started by spawn-fcgi but later > the FCGI_Accept_r() will return with an "Unkown listenType" internal error. > I have the suspicion that spawn-fcgi is broken on Windows since the very > same c++ build is working fine with apache. In case of apache there is no > need to use spawn-fcgi , it can spawn the fastcgi process by it's own. I > think spawn-fcgi is not forwarding the standard input/output and the > standard error. > > I used Cygwin to build spawn-fcgi on windows. > > Is there an alternative to spawn-fcgi on windows ? 
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,291545,291545#msg-291545
>
> ------------------------------
>
> Message: 4
> Date: Tue, 18 May 2021 16:44:26 +0530
> From: Anoop Alias
> To: Nginx
> Subject: net::ERR_HTTP2_SERVER_REFUSED_STREAM
> Message-ID: OnJwvexiFiUg at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi,
>
> Browser consoles are showing the error net::ERR_HTTP2_SERVER_REFUSED_STREAM
> and resources are not loading when http2 is enabled (see attached
> screenshot).
>
> The errors go away when http2 is disabled.
>
> #################################################
> [root at vps ~]# nginx -V
> nginx version: nginx/1.19.10
> built by gcc 4.8.5 20150623 (Red Hat 4.8.5-44) (GCC)
> built with OpenSSL 1.1.1k 25 Mar 2021
> TLS SNI support enabled
> configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
> --modules-path=/etc/nginx/modules --with-pcre=./pcre-8.44 --with-pcre-jit
> --with-zlib=./zlib-1.2.11 --with-openssl=./openssl-1.1.1k
> --with-openssl-opt=enable-tls1_3 --conf-path=/etc/nginx/nginx.conf
> --error-log-path=/var/log/nginx/error_log
> --http-log-path=/var/log/nginx/access_log --pid-path=/var/run/nginx.pid
> --lock-path=/var/run/nginx.lock
> --http-client-body-temp-path=/dev/shm/client_temp
> --http-proxy-temp-path=/dev/shm/proxy_temp
> --http-fastcgi-temp-path=/dev/shm/fastcgi_temp
> --http-uwsgi-temp-path=/dev/shm/uwsgi_temp
> --http-scgi-temp-path=/dev/shm/scgi_temp --user=nobody --group=nobody
> --with-http_ssl_module --with-http_realip_module
> --with-http_addition_module --with-http_sub_module --with-http_dav_module
> --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module
> --with-http_gzip_static_module --with-http_random_index_module
> --with-http_secure_link_module --with-http_stub_status_module
> --with-http_auth_request_module --with-file-aio --with-threads
> --with-stream --with-stream_ssl_module --with-http_slice_module
> --with-compat --with-http_v2_module
> --add-dynamic-module=/usr/local/rvm/gems/ruby-2.6.6/gems/passenger-6.0.7/src/nginx_module
> --add-dynamic-module=echo-nginx-module-0.61
> --add-dynamic-module=headers-more-nginx-module-0.32
> --add-dynamic-module=ngx_http_redis-0.3.8
> --add-dynamic-module=redis2-nginx-module
> --add-dynamic-module=srcache-nginx-module-0.31
> --add-dynamic-module=ngx_devel_kit-0.3.0
> --add-dynamic-module=set-misc-nginx-module-0.31
> --add-dynamic-module=ngx_http_geoip2_module
> --add-dynamic-module=testcookie-nginx-module --with-cc-opt='-O2 -g -pipe
> -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong
> --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic'
> --with-ld-opt=-Wl,-E
> ######################################################
>
> I have enabled debug logging, but it is hard for me to decipher the exact
> cause from the debug log.
>
> I am using the latest nginx, so https://trac.nginx.org/nginx/ticket/2155
> is ruled out as well.
>
> Debug log -- https://autom8n.com/nginx_debug.txt
>
> Any help is much appreciated.
>
> --
> *Anoop P Alias*
>
> [attachment scrubbed: x2.png, image/png - screenshot]
>
> ------------------------------
>
> End of nginx Digest, Vol 139, Issue 19
> **************************************

From francis at daoine.org Wed May 19 08:27:30 2021
From: francis at daoine.org (Francis Daly)
Date: Wed, 19 May 2021 09:27:30 +0100
Subject: Help: Using Nginx Reverse Proxy bypass traffic in to a application running in a container
In-Reply-To:
References:
Message-ID: <20210519082730.GA11167@daoine.org>

On Tue, May 18, 2021 at 07:29:20AM +0530, Amila Gunathilaka wrote:

Hi there,

I'm not entirely sure what your setup is, so I will describe what I
think you have; please correct me where I am wrong.

> I have nginx installed on my Linux host, listening on HTTP port 80, and I
> want to pass external traffic coming from the external load balancer
> (upstream server) into my nginx reverse proxy (port 80), and then pass
> that HTTP traffic on to my application running in a Docker container
> (application host port 9091).

I think you have "the client" (which is "the user with the web browser"),
which makes a http request to "the external load balancer". That talks to
your nginx, which expects a proxy_protocol-then-http request. And nginx
makes a http request to "the container application", on 127.0.0.1:9091.

> But my nginx configuration didn't work: it always returns a *405 method
> not allowed* error when requests pass between nginx and the external load
> balancer (upstream server).

In nginx terms, in the setup I have described above, "upstream" is "the
container application", not the external load balancer. That won't affect
the problem, but might help when searching the web for help.

So -- can you show an example request that does not give the response
that you want?

Something like

    curl -v http://load-balancer/whatever

will probably be helpful as a start. Feel free to remove any names or
addresses that you consider private, before pasting the response.

Right now, it is not clear to me whether the 405 is coming from the load
balancer, from nginx, or from the container application. The fix will
likely be different in each case.

Possibly the logs from each of the servers will indicate how far things
get, and where they first fail. You might spot something obvious in there,
if you can easily find them.

Good luck with it,

f
--
Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org Wed May 19 16:51:16 2021
From: nginx-forum at forum.nginx.org (ladushka)
Date: Wed, 19 May 2021 12:51:16 -0400
Subject: NGINX and 5xx
Message-ID: <6fd364ba6aa2d88add1d3966c0b8ab52.NginxMailingListEnglish@forum.nginx.org>

Hello Friends,

Is there a way to pass 5xx responses to the client?
To be clear: do not mark the upstream server as down; instead, pass all
5xx responses to the client (i.e. ignore the errors).

Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,291570,291570#msg-291570

From pluknet at nginx.com Wed May 19 17:10:05 2021
From: pluknet at nginx.com (Sergey Kandaurov)
Date: Wed, 19 May 2021 20:10:05 +0300
Subject: NGINX and 5xx
In-Reply-To: <6fd364ba6aa2d88add1d3966c0b8ab52.NginxMailingListEnglish@forum.nginx.org>
References: <6fd364ba6aa2d88add1d3966c0b8ab52.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <269F367D-57D0-431E-A7F9-0828C124F5EB@nginx.com>

> On 19 May 2021, at 19:51, ladushka wrote:
>
> Hello Friends,
> Is there a way to pass 5xx responses to the client?
> To be clear: do not mark the upstream server as down; instead, pass all
> 5xx responses to the client (i.e. ignore the errors).

This breaks into two parts: passing errors, such as 5xx, to the client,
and preventing the upstream server from being marked as down.

You may want to adjust proxy_next_upstream to redefine what counts as an
unsuccessful attempt, so that 5xx responses are passed to the client.

Besides that, an unsuccessful attempt counts toward max_fails;
max_fails=0 disables that accounting.
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#max_fails

--
Sergey Kandaurov

From lucas at lucasrolff.com Wed May 19 18:44:45 2021
From: lucas at lucasrolff.com (Lucas Rolff)
Date: Wed, 19 May 2021 18:44:45 +0000
Subject: Memory usage in nginx proxy setup and use of min_uses
In-Reply-To:
References: <437D11E4-3881-48C6-B9EE-FF0389699F39@lucasrolff.com> <41F6E760-5E8D-42BE-801B-8FB853A53A59@lucasrolff.com>
Message-ID: <95157B2B-524C-4862-B258-516AB34B679B@lucasrolff.com>

> If you nevertheless observe 500 being returned in practice, this might
> be the actual thing to focus on.

Even with sub-100 requests and 4 workers I've experienced it multiple
times: simply because the number of cache keys was exceeded, nginx threw
500 internal server errors for new, uncached requests for hours on end
(in that particular instance, about 300 keys expire per 5 minutes). When
it happens again, I'll obviously investigate further if it's not supposed
to happen.

> an attacker can easily request the same resource several times, moving
> it to the "normal" category

Correct - an attacker can almost always find ways to do things if they
want to; I've just yet to see one "smart" enough to request the same
things multiple times.

Even if it's not an attacker but a misconfigured application (one that
isn't directly managed by whoever manages the nginx server): if an
application, for example, passes through identifiers in the URI (imagine
gclid or fbclid hashes), these types of IDs are generally unique per
visitor. Query strings may differ, but in 99% of the cases where this
happens we're only going to see each request once or twice. As a result
we do not fill the disk, because of min_uses, but we do fill the memory,
because the key isn't cleared out before reaching the "inactive" limit.

So at least in use cases like that, we'd often be able to mitigate
somewhat misconfigured applications - it's quite common within the CDN
industry to see this issue anyway. While the ones running the CDN then
obviously have to reach out to the customer and ask them to fix their
application, it would be awesome to have a more proactive approach
available that would limit the urgency of such a fix.
What I can hear is that you don't see the point of such a feature - that's
fine. I guess the alternative is to use Lua to hook into nginx for the
cache metadata/shm (probably needs a custom nginx module as well, since
the shm isn't exposed to Lua); then one should be able to wipe out the
useless keys that way.

Best Regards,
Lucas Rolff

On 18/05/2021, 03.27, "nginx on behalf of Maxim Dounin" wrote:

    Hello!

    On Mon, May 17, 2021 at 07:33:43PM +0000, Lucas Rolff wrote:

    > Hi Maxim!
    >
    > > - The attack you are considering is not about "poisoning". At
    > > most, it can be used to make the cache less efficient.
    >
    > Poisoning is probably the wrong word indeed, and since nginx
    > doesn't really handle reaching the limit of keys_zone, it simply
    > starts to return a 500 internal server error. So I don't think
    > it's making the cache less efficient (other than you won't be
    > able to cache that much), you're ending up breaking nginx,
    > because when the keys_zone limit has been reached, nginx simply
    > starts returning 500 internal server error for items that are
    > not already in proxy_cache - if it would do an LRU/LFU on the
    > keys, then yes, you could probably end up with a cache less
    > efficient.

    While 500 is possible in some cases, especially in configurations
    with many worker processes and high request concurrency, even in
    the worst case it's expected to happen at most for half of the
    requests, usually much less than that. Further, cache manager
    monitors the number of cache items in the keys_zone, cleaning
    things in advance, making 500 almost impossible in practice.

    If you nevertheless observe 500 being returned in practice, this
    might be the actual thing to focus on.

    [...]

    > Unless nginx very recently implemented that reaching keys_zone
    > limit will start purging old cache - then no, it would still
    > break the nginx for non-cached requests (returning 500 internal
    > server error). If nginx has started to purge old things if the
    > limit is reached, then sure, the attacker would still be able to
    > wipe out the cache.

    Clearing old cache items when it is not possible to allocate a
    cache node dates back to initial cache support in nginx 0.7.44[1].
    And cache manager monitoring of the keys_zone and clearing it in
    advance dates back to nginx 1.9.13 released about five years
    ago[2].

    Not sure any of these counts as "very recently".

    [...]

From nginx-forum at forum.nginx.org Wed May 19 20:32:44 2021
From: nginx-forum at forum.nginx.org (ladushka)
Date: Wed, 19 May 2021 16:32:44 -0400
Subject: NGINX and 5xx
In-Reply-To: <269F367D-57D0-431E-A7F9-0828C124F5EB@nginx.com>
References: <269F367D-57D0-431E-A7F9-0828C124F5EB@nginx.com>
Message-ID: <9402e6364912564372ddc615adf46d0f.NginxMailingListEnglish@forum.nginx.org>

Hello Sergey,

Thank you for your answer. I have already set max_fails=0, but when my
backend starts sending 5xx, nginx still considers it down. That's why I'm
here... :(

Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,291570,291573#msg-291573

From nginx-forum at forum.nginx.org Thu May 20 07:25:48 2021
From: nginx-forum at forum.nginx.org (mbrother)
Date: Thu, 20 May 2021 03:25:48 -0400
Subject: Nginx mail proxy - ensure sender match authenticated user
Message-ID: <7d5a46ba1da1023a2f2da52feb683638.NginxMailingListEnglish@forum.nginx.org>

Hello,

I am a fan of nginx and I really like the nginx mail proxy module. I'm
having a problem with the authenticated account not matching the sender
when using this module. For a better understanding, please see my test
below:

root at nginx:~# telnet xx.xx.xx.xx 25
Trying xx.xx.xx.xx ...
Connected to xx.xx.xx.xx .
Escape character is '^]'.
220 smtp.xxx.xxx ESMTP ready
ehlo mail.example.com
250-smtp. xxx.xxx
250 AUTH LOGIN
AUTH LOGIN
334 VXNlcm5hbWU6
xxxxxxxxxxxxxxxxxxxxxxxxxx
334 UGFzc3dvcmQ6
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
235 2.0.0 OK
mail from: admin at gmail.com
250 OK Sender ok
rcpt to: admin at gmail.com
250 OK Recipient ok
data
354 Start mail input; end with .
test
test
.
250 OK
quit
221 Service closing transmission channel
Connection closed by foreign host.

As you can see, after successful authentication I can send email using any
account, and nginx skips checking whether this account matches the
previously authenticated account. I want to add code to the source of the
nginx mail module to check whether the sender matches the previously
authenticated account; however, I don't know C programming, nor do I
understand how the functions in the .c source files work. Currently my
server is being used to send a lot of spam. So I would really appreciate
it if you could give me some advice to fix my problem.

Thank you very much!
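A brief, hedged sketch of the kind of mail-proxy setup under discussion may
help frame the replies that follow. The directives are real nginx mail-module
directives, but the addresses, ports, and the auth_http endpoint are
placeholders, not the poster's actual configuration:

    mail {
        server_name  mail.example.com;
        # the auth_http service validates the login and picks the backend;
        # the URL here is a placeholder
        auth_http    http://127.0.0.1:9000/auth;

        server {
            listen     25;
            protocol   smtp;
            smtp_auth  login plain;
            # XCLIENT passes the authenticated login and client address
            # to the backend (on by default)
            xclient    on;
            # with nginx >= 1.19.4, AUTH can instead be proxied:
            #proxy_smtp_auth on;
        }
    }

Either way, enforcing that MAIL FROM matches the authenticated login is the
backend's job, as the replies below point out.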
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,291574,291574#msg-291574

From SPAM_TRAP_gmane at jonz.net Thu May 20 13:07:23 2021
From: SPAM_TRAP_gmane at jonz.net (Jonesy)
Date: Thu, 20 May 2021 13:07:23 -0000 (UTC)
Subject: nginx Digest, Vol 139, Issue 19
References:
Message-ID:

On Wed, 19 May 2021 07:55:24 +0530, Amila Gunathilaka wrote:
>
> Any update for my issue guys ?

Not if you are going to re-post THE ENTIRE ^%$&%&%! DIGEST

From mdounin at mdounin.ru Thu May 20 13:28:57 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 20 May 2021 16:28:57 +0300
Subject: Nginx mail proxy - ensure sender match authenticated user
In-Reply-To: <7d5a46ba1da1023a2f2da52feb683638.NginxMailingListEnglish@forum.nginx.org>
References: <7d5a46ba1da1023a2f2da52feb683638.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Hello!

On Thu, May 20, 2021 at 03:25:48AM -0400, mbrother wrote:

> I am a fan of nginx and I really like the nginx mail proxy module. I'm
> having a problem with the authenticated account not matching the sender
> when using this module. For a better understanding, please see my test
> below:

[...]

> As you can see, after successful authentication I can send email using any
> account, and nginx skips checking whether this account matches the
> previously authenticated account.

After successful authentication nginx establishes an opaque pipe
between the client and the backend server, and no longer controls
what the client does. It's up to the backend server to check if
the client is allowed to send relevant messages or not.

--
Maxim Dounin
http://mdounin.ru/

From rosolino.todaro at strabag.com Thu May 20 17:36:21 2021
From: rosolino.todaro at strabag.com (Rosolino Todaro)
Date: Thu, 20 May 2021 17:36:21 +0000
Subject: Ignore Content-Length Header with NGINX configured as reverse-proxy
Message-ID:

Hello All,

Is it possible to configure NGINX to ignore the Content-Length header
when used as a reverse-proxy? Ideally for an individual location block,
while leaving the rest of the configuration unaffected.

Currently, if the client sends a POST with a Content-Length of 32767,
then 32767 bytes make it to the internal application and not a byte more!
As this is for an implementation of RTSP over HTTP(S), the client cannot
disconnect to resend > 32767 bytes.

The current reverse-proxy block:

location ~ ^\/(live|archive) {
    proxy_pass http://127.0.0.1:3900;
    chunked_transfer_encoding off;
    proxy_buffering off;
    proxy_request_buffering off;
}

nginx version: nginx/1.17.9

Much obliged!
Rosolino Todaro

From nginx-forum at forum.nginx.org Thu May 20 18:42:10 2021
From: nginx-forum at forum.nginx.org (foobug)
Date: Thu, 20 May 2021 14:42:10 -0400
Subject: Add files to cache manually
Message-ID: <9cab4ae3c8dd5043851f6b74d435059c.NginxMailingListEnglish@forum.nginx.org>

I have nginx set up to proxy file uploads.
The file upload streams into nginx, goes into my upstream, and the
upstream then streams it to Amazon S3 (after some basic processing).

In addition to proxying the upload, I also want to proxy the download,
with some caching on the nginx side. It would be ideal if I could add the
uploaded file directly to the nginx cache after it is uploaded, so nginx
does not have to re-download it from S3 immediately after it was uploaded
there.

One option is to keep a copy of the file on disk (outside of the nginx
cache). Then use something like try_files to read it, and have that
response be cached by nginx. But then I end up with two copies of the
file on disk (one in my try_files directory, and one in the nginx cache).
I also need to manually manage the files stored in my try_files directory
(to delete them after they enter the nginx cache). This is kind of ugly.

If I can reverse engineer the nginx cache file format, is there a way to
tell the nginx cache manager that it exists? Right now, it seems like the
cache manager will disregard any files in the nginx cache that it doesn't
know about.

Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,291581,291581#msg-291581

From nginx-forum at forum.nginx.org Fri May 21 02:13:01 2021
From: nginx-forum at forum.nginx.org (mbrother)
Date: Thu, 20 May 2021 22:13:01 -0400
Subject: Nginx mail proxy - ensure sender match authenticated user
In-Reply-To:
References:
Message-ID: <6f0599094f9a6ba918acf54fdfe830ce.NginxMailingListEnglish@forum.nginx.org>

Hello Maxim,

Thank you for your answer. As you know, nginx does not send the AUTH
command to the backend server, so there's no way for the backend to know
if the sender matches the authenticated account. I tried the
proxy_smtp_auth config, but then nginx sends the MAIL command to my
server and it cannot understand it :(

Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,291574,291583#msg-291583

From mauro.tridici at cmcc.it Fri May 21 09:16:46 2021
From: mauro.tridici at cmcc.it (Mauro Tridici)
Date: Fri, 21 May 2021 11:16:46 +0200
Subject: failed (104: Connection reset by peer) while proxying connection
Message-ID: <7828EE00-F45C-4C09-864F-B4AF9B59E496@cmcc.it>

Dear Users,

I'm noticing these error messages in /var/log/nginx/error.log:

2021/05/21 10:57:25 [error] 21145#0: *7 recv() failed (104: Connection reset by peer) while proxying connection, client: public_ip, server: 0.0.0.0:5044, upstream: "private_ip:5044", bytes from/to client:321/7709, bytes from/to upstream:7709/321
2021/05/21 10:58:07 [error] 21145#0: *9 recv() failed (104: Connection reset by peer) while proxying connection, client: public_ip, server: 0.0.0.0:5044, upstream: "private_ip:5044", bytes from/to client:321/7709, bytes from/to upstream:7709/321
2021/05/21 10:58:46 [error] 21145#0: *11 recv() failed (104: Connection reset by peer) while proxying connection, client: public_ip, server: 0.0.0.0:5044, upstream: "private_ip:5044", bytes from/to client:321/7709, bytes from/to upstream:7709/321
2021/05/21 10:59:19 [error] 21145#0: *13 recv() failed (104: Connection reset by peer) while proxying connection, client: public_ip, server: 0.0.0.0:5044, upstream: "private_ip:5044", bytes from/to client:321/7709, bytes from/to upstream:7709/321
2021/05/21 10:59:57 [error] 21145#0: *15 recv() failed (104: Connection reset by peer) while proxying connection, client: public_ip, server: 0.0.0.0:5044, upstream: "private_ip:5044", bytes from/to client:321/7709, bytes from/to upstream:7709/321
2021/05/21 11:00:55 [error] 21145#0: *17 recv() failed (104: Connection reset by peer) while proxying connection, client: public_ip, server: 0.0.0.0:5044, upstream: "private_ip:5044", bytes from/to client:321/7709, bytes from/to upstream:7709/321
2021/05/21 11:01:38 [error] 21145#0: *19 recv() failed (104: Connection reset by peer) while proxying connection, client: public_ip, server: 0.0.0.0:5044, upstream: "private_ip:5044", bytes from/to client:321/7709, bytes from/to upstream:7709/321
2021/05/21 11:02:33 [error] 21145#0: *21 recv() failed (104: Connection reset by peer) while proxying connection, client: public_ip, server: 0.0.0.0:5044, upstream: "private_ip:5044", bytes from/to client:321/7709, bytes from/to upstream:7709/321
2021/05/21 11:03:06 [error] 21145#0: *23 recv() failed (104: Connection reset by peer) while proxying connection, client: public_ip, server: 0.0.0.0:5044, upstream: "private_ip:5044", bytes from/to client:321/7709, bytes from/to upstream:7709/321
2021/05/21 11:03:39 [error] 21145#0: *25 recv() failed (104: Connection reset by peer) while proxying connection, client: public_ip, server: 0.0.0.0:5044, upstream: "private_ip:5044", bytes from/to client:321/7709, bytes from/to upstream:7709/321
2021/05/21 11:04:33 [error] 21145#0: *27 recv() failed (104: Connection reset by peer) while proxying connection, client: public_ip, server: 0.0.0.0:5044, upstream: "private_ip:5044", bytes from/to client:321/7709, bytes from/to upstream:7709/321

Basically, it seems that the error is related to a particular (authorized)
IP address. This remote Filebeat client is sending data to the Logstash
server via nginx.

Do you have any suggestions to fix this annoying issue?

Thank you in advance,
Mauro

From r at roze.lv Fri May 21 11:56:55 2021
From: r at roze.lv (Reinis Rozitis)
Date: Fri, 21 May 2021 14:56:55 +0300
Subject: Add files to cache manually
In-Reply-To: <9cab4ae3c8dd5043851f6b74d435059c.NginxMailingListEnglish@forum.nginx.org>
References: <9cab4ae3c8dd5043851f6b74d435059c.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <003801d74e38$65c5b5b0$31512110$@roze.lv>

> One option is to keep a copy of the file on disk (outside of the nginx
> cache). Then use something like try_files to read it, and have that
> response be cached by nginx.
> But then I end up with two copies of the file on disk (one in my
> try_files directory, and one in the nginx cache). I also need to manually
> manage the files stored in my try_files directory (to delete them after
> they enter the nginx cache). This is kind of ugly.

Is there a reason why you need the nginx "cache" instead of just storing
the files statically?

One way would be, instead of using the cache, to store the files as is
(in the same structure) with proxy_store:
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_store
(there is even a small configuration example).

The only drawback is that you have to manage the cache directory yourself
(delete old files / implement LRU if needed, etc.), but that's usually
not too hard with `find` and works just fine.

rr

From mdounin at mdounin.ru Fri May 21 14:44:57 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 21 May 2021 17:44:57 +0300
Subject: Nginx mail proxy - ensure sender match authenticated user
In-Reply-To: <6f0599094f9a6ba918acf54fdfe830ce.NginxMailingListEnglish@forum.nginx.org>
References: <6f0599094f9a6ba918acf54fdfe830ce.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Hello!

On Thu, May 20, 2021 at 10:13:01PM -0400, mbrother wrote:

> Thank you for your answer. As you know, nginx does not send the AUTH
> command to the backend server, so there's no way for the backend to know
> if the sender matches the authenticated account. I tried the
> proxy_smtp_auth config, but then nginx sends the MAIL command to my
> server and it cannot understand it :(

By default, for SMTP nginx uses the XCLIENT command
(http://nginx.org/r/xclient). It allows nginx to pass all the
relevant information about the client, including the login, IP
address, and more.

Alternatively, starting with nginx 1.19.4 it can be configured to
proxy SMTP authentication (http://nginx.org/r/proxy_smtp_auth).
While limited compared to XCLIENT, this still passes the client
login to the backend server.

If neither of these works for you, you probably want to focus on
your SMTP server configuration instead. A good start would be to
configure it to work properly without nginx in front of it.

--
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org Fri May 21 16:18:51 2021
From: nginx-forum at forum.nginx.org (foobug)
Date: Fri, 21 May 2021 12:18:51 -0400
Subject: Add files to cache manually
In-Reply-To: <003801d74e38$65c5b5b0$31512110$@roze.lv>
References: <003801d74e38$65c5b5b0$31512110$@roze.lv>
Message-ID:

> Is there a reason why you need the nginx "cache" instead of just storing
> the files statically?

I don't have a big enough disk to store *all* user-uploaded files
accumulated over the years, so I need some way to manage a pool of space
to store hot uploads.

> One way would be, instead of using the cache, to store the files as is
> (in the same structure) with proxy_store

I did not know about proxy_store. That may come in handy if I have to
implement everything myself. Thank you.

> The only drawback is that you have to manage the cache directory yourself
> (delete old files / implement LRU if needed, etc.), but that's usually
> not too hard with `find` and works just fine.

Right. The devil is in the detail, which is why I'd prefer to lean on the
robustness of nginx. It seems like nginx is 99% there. It is "just"
lacking a way to inject an HTTP response into the cache (i.e. a way to
warm up the cache).
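As a reference for the proxy_store approach Reinis suggests, the
configuration example the proxy_store documentation gives looks roughly
like this; the paths and the "backend" upstream name are placeholders that
would need adapting to the S3 setup described above:

    location /images/ {
        root                /data/www;
        # if the file is not on disk yet, fetch and store it
        error_page          404 = /fetch$uri;
    }

    location /fetch/ {
        internal;
        proxy_pass          http://backend/;
        proxy_store         on;
        proxy_store_access  user:rw group:rw all:r;
        proxy_temp_path     /data/temp;
        alias               /data/www/;
    }

Unlike the cache, files stored this way are plain static files, so an
uploaded object could simply be written into the same directory tree to
"warm" it - at the cost of managing expiry yourself, e.g. with find.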
Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,291581,291592#msg-291592

From anoopalias01 at gmail.com Sat May 22 01:58:21 2021
From: anoopalias01 at gmail.com (Anoop Alias)
Date: Sat, 22 May 2021 07:28:21 +0530
Subject: failed (104: Connection reset by peer) while proxying connection
In-Reply-To: <7828EE00-F45C-4C09-864F-B4AF9B59E496@cmcc.it>
References: <7828EE00-F45C-4C09-864F-B4AF9B59E496@cmcc.it>
Message-ID:

The private_ip:5044 is closing the connection before completing the
request.

You should check the logs on the upstream server for why it is doing
this. Perhaps a security module, or something that drops connections
immediately, etc.

On Fri, May 21, 2021 at 2:46 PM Mauro Tridici wrote:
>
> Dear Users,
>
> I'm noticing these error messages in /var/log/nginx/error.log:
>
> [...]
>
> Basically, it seems that the error is related to a particular
> (authorized) IP address. This remote Filebeat client is sending data to
> the Logstash server via nginx.
>
> Do you have any suggestions to fix this annoying issue?
>
> Thank you in advance,
> Mauro

--
*Anoop P Alias*

From mauro.tridici at cmcc.it Sat May 22 07:52:28 2021
From: mauro.tridici at cmcc.it (Mauro Tridici)
Date: Sat, 22 May 2021 09:52:28 +0200
Subject: failed (104: Connection reset by peer) while proxying connection
In-Reply-To:
References: <7828EE00-F45C-4C09-864F-B4AF9B59E496@cmcc.it>
Message-ID: <9D382369-6DCF-423A-941D-5D512DCE0CBD@cmcc.it>

Thank you very much for your reply, Anoop.
I will collect the upstream logs and I will let you know.

Have a great day.
Mauro

> On 22 May 2021, at 03:58, Anoop Alias wrote:
>
> The private_ip:5044 is closing the connection before completing the
> request.
>
> You should check the logs on the upstream server for why it is doing
> this. Perhaps a security module, or something that drops connections
> immediately, etc.
>
> [...]

From nginx-forum at forum.nginx.org Sat May 22 14:10:01 2021
From: nginx-forum at forum.nginx.org (raphy)
Date: Sat, 22 May 2021 10:10:01 -0400
Subject: What's the problem with this nginx configuration?
Message-ID: <14eacde04ccefbd8bc6be27d3430041a.NginxMailingListEnglish@forum.nginx.org>

Hi!!

Due to some issues with installed packages, which caused the system to
freeze, I had to re-install Ubuntu from scratch.
"upgrade"; } } server { if ($host = grasp.deals) { return 301 https://$host$request_uri; } # managed by Certbot listen 80 default_server; listen [::]:80 default_server; error_page 497 https://$host:$server_port$request_uri; server_name ggc.world; return 301 https://$server_name$request_uri; access_log /var/log/nginx/grapdeals-access.log combined; add_header Strict-Transport-Security "max-age=31536000"; location = /favicon.ico { access_log off; log_not_found off; } location / { proxy_pass http://127.0.0.1:8080; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; #proxy_set_header Host $host; } } # https://www.nginx.com/blog/nginx-nodejs-websockets-socketio/ # https://gist.github.com/uorat/10b15a32f3ffa3f240662b9b0fefe706 # http://nginx.org/en/docs/stream/ngx_stream_core_module.html upstream websocket { ip_hash; server localhost:3000; } server { listen 81; server_name grasp.deals; #location / { location ~ ^/(websocket|websocket\/socket-io) { proxy_pass http://127.0.0.1:4201; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_set_header X-Forwared-For $remote_addr; proxy_set_header Host $host; proxy_redirect off; proxy_set_header X-Real-IP $remote_addr; } } # https://stackoverflow.com/questions/40516288/webpack-dev-server-with-nginx-proxy-pass upstream golang-webserver { ip_hash; server 127.0.0.1:2000; } server { root /puser/add; ssl_protocols TLSv1.2 TLSv1.3; ssl_ciphers EECDH+CHACHA20:EECDH+CHACHA20-draft:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5; ssl_prefer_server_ciphers on; ssl_session_cache shared:SSL:50m; location / { proxy_pass http://golang-webserver; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; #proxy_set_header Host $host; } } What's the problem with this nginx configuration? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291599,291599#msg-291599 From teward at thomas-ward.net Sat May 22 16:35:41 2021 From: teward at thomas-ward.net (Thomas Ward) Date: Sat, 22 May 2021 12:35:41 -0400 Subject: What's the problem with this nginx configuration? In-Reply-To: <14eacde04ccefbd8bc6be27d3430041a.NginxMailingListEnglish@forum.nginx.org> References: <14eacde04ccefbd8bc6be27d3430041a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <45a75d0f-f3e3-40ed-8651-03596564377a@thomas-ward.net> The error is self explanatory.? You have two default_server entries that end up listening on port 80 on all IPs. listen 80 default_server; listen [::]:80 default_server; As configured this ends up listening on every port 80 on all IPs.? Remove one of these to resolve the error. ?Get BlueMail for Android ? -------- Original Message -------- From: raphy Sent: Sat May 22 10:10:01 EDT 2021 To: nginx at nginx.org Subject: What's the problem with this nginx configuration? Hi!! Due to some issues in packages installed which caused the freezing of the system, I had to re-install Ubuntu from scratch. 
Now the previous nginx configuration, which previously worked fine, gives this error: ginx: [emerg] a duplicate default server for 0.0.0.0:80 in /etc/nginx/conf.d/default.conf:54 nginx: configuration file /etc/nginx/nginx.conf test failed This is /etc/nginx/conf.g/default.conf : server { listen 443 ssl http2 default_server; server_name grasp.deals; ssl_certificate /etc/letsencrypt/live/grasp.deals/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/grasp.deals/privkey.pem; # managed by Certbot ssl_trusted_certificate /etc/letsencrypt/live/grasp.deals/chain.pem; ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot ssl_session_timeout 5m; ssl_protocols TLSv1.2 TLSv1.3; ssl_ciphers EECDH+CHACHA20:EECDH+CHACHA20-draft:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5; ssl_prefer_server_ciphers on; ssl_session_cache shared:SSL:50m; #ssl_stapling on; #ssl_stapling_verify on; access_log /var/log/nginx/graspdeals-access.log combined; add_header Strict-Transport-Security "max-age=31536000"; location = /favicon.ico { access_log off; log_not_found off; } location / { proxy_pass http://127.0.0.1:8080; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; #proxy_set_header Host $host; } # http://nginx.org/en/docs/http/ngx_http_core_module.html#try_files # https://unix.stackexchange.com/questions/585963/nginx-configuration-how-to-load-static-files-other-than-index-html/586567#586567 location /weights { root /home/raphy/www; try_files $uri $uri/ =404; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # Following is necessary for Websocket support proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; } } server { if ($host = grasp.deals) { return 301 https://$host$request_uri; } # managed by Certbot listen 80 default_server; listen [::]:80 default_server; error_page 497 https://$host:$server_port$request_uri; server_name ggc.world; return 301 https://$server_name$request_uri; access_log /var/log/nginx/grapdeals-access.log combined; add_header Strict-Transport-Security "max-age=31536000"; location = /favicon.ico { access_log off; log_not_found off; } location / { proxy_pass http://127.0.0.1:8080; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; #proxy_set_header Host $host; } } # https://www.nginx.com/blog/nginx-nodejs-websockets-socketio/ # https://gist.github.com/uorat/10b15a32f3ffa3f240662b9b0fefe706 # http://nginx.org/en/docs/stream/ngx_stream_core_module.html upstream websocket { ip_hash; server localhost:3000; } server { listen 81; server_name grasp.deals; #location / { location ~ ^/(websocket|websocket\/socket-io) { proxy_pass http://127.0.0.1:4201; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_set_header X-Forwared-For $remote_addr; proxy_set_header Host $host; proxy_redirect off; proxy_set_header X-Real-IP $remote_addr; } } # https://stackoverflow.com/questions/40516288/webpack-dev-server-with-nginx-proxy-pass upstream golang-webserver { ip_hash; server 127.0.0.1:2000; } server { root /puser/add; ssl_protocols TLSv1.2 TLSv1.3; ssl_ciphers EECDH+CHACHA20:EECDH+CHACHA20-draft:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5; ssl_prefer_server_ciphers on; ssl_session_cache shared:SSL:50m; location / { proxy_pass 
http://golang-webserver; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; #proxy_set_header Host $host; } } What's the problem with this nginx configuration? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291599,291599#msg-291599 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue May 25 15:37:12 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 25 May 2021 18:37:12 +0300 Subject: nginx-1.21.0 Message-ID: Changes with nginx 1.21.0 25 May 2021 *) Security: 1-byte memory overwrite might occur during DNS server response processing if the "resolver" directive was used, allowing an attacker who is able to forge UDP packets from the DNS server to cause worker process crash or, potentially, arbitrary code execution (CVE-2021-23017). *) Feature: variables support in the "proxy_ssl_certificate", "proxy_ssl_certificate_key" "grpc_ssl_certificate", "grpc_ssl_certificate_key", "uwsgi_ssl_certificate", and "uwsgi_ssl_certificate_key" directives. *) Feature: the "max_errors" directive in the mail proxy module. *) Feature: the mail proxy module supports POP3 and IMAP pipelining. *) Feature: the "fastopen" parameter of the "listen" directive in the stream module. Thanks to Anbang Wen. *) Bugfix: special characters were not escaped during automatic redirect with appended trailing slash. *) Bugfix: connections with clients in the mail proxy module might be closed unexpectedly when using SMTP pipelining. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue May 25 15:37:41 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 25 May 2021 18:37:41 +0300 Subject: nginx-1.20.1 Message-ID: Changes with nginx 1.20.1 25 May 2021 *) Security: 1-byte memory overwrite might occur during DNS server response processing if the "resolver" directive was used, allowing an attacker who is able to forge UDP packets from the DNS server to cause worker process crash or, potentially, arbitrary code execution (CVE-2021-23017). -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue May 25 15:39:26 2021 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 25 May 2021 18:39:26 +0300 Subject: nginx security advisory (CVE-2021-23017) Message-ID: Hello! A security issue in nginx resolver was identified, which might allow an attacker to cause 1-byte memory overwrite by using a specially crafted DNS response, resulting in worker process crash or, potentially, in arbitrary code execution (CVE-2021-23017). The issue only affects nginx if the "resolver" directive is used in the configuration file. Further, the attack is only possible if an attacker is able to forge UDP packets from the DNS server. The issue affects nginx 0.6.18 - 1.20.0. The issue is fixed in nginx 1.21.0, 1.20.1. Patch for the issue can be found here: http://nginx.org/download/patch.2021.resolver.txt Thanks to Luis Merino, Markus Vervier, Eric Sesterhenn, X41 D-Sec GmbH. -- Maxim Dounin http://nginx.org/ From amila.kdam at gmail.com Tue May 25 16:17:47 2021 From: amila.kdam at gmail.com (Amila Gunathilaka) Date: Tue, 25 May 2021 21:47:47 +0530 Subject: nginx Digest, Vol 139, Issue 21 In-Reply-To: References: Message-ID: Dear Francis, I'm sorry for taking time to reply to this, you were so keen about my problem. Thank you. 
Actually my problem occurs when sending the *response* to the load
balancer from nginx (not the request - it should be corrected to
*response* in my previous email).

My external load balancer is always doing a health check against my nginx
port (80); below is the *response* message in /var/log/nginx/access.log
for the health-check request coming from the external load balancer.

[image: image.png]

Below is the nginx config file I use to pass traffic coming from the
external load balancer into the nginx port (80) and to pass that traffic
on to my app running in a container on the same server as nginx (app
port 9091).

server {
    listen 80;
    server_name 172.25.234.105;
    error_page 405 =200 $uri;

    location / {
        error_page 405 =200 $uri;
        proxy_pass http://127.0.0.1:9091;
        auth_basic "PROMETHEUS PUSHGATEWAY Login Area";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }
}

Please contact me for more information if you need it. I believe I can
overcome this 405 response issue in the nginx config file?

Thanks
Amila

On Wed, May 19, 2021 at 5:30 PM wrote:

> Today's Topics:
>
>    1. Re: Help: Using Nginx Reverse Proxy bypass traffic in to a
>       application running in a container (Francis Daly)
>
> [...]
>
> End of nginx Digest, Vol 139, Issue 21
> **************************************

From nginx-forum at forum.nginx.org Wed May 26 14:08:29 2021
From: nginx-forum at forum.nginx.org (ahoffmann)
Date: Wed, 26 May 2021 10:08:29 -0400
Subject: OpenSSL1.0.2 vulnerability with ssl_verify_client disabled
Message-ID: <570503bf78be33ea786543559e87f35b.NginxMailingListEnglish@forum.nginx.org>

Hello all,

Regarding, more specifically, the vulnerability described by
CVE-2020-1971: would my NGINX server be affected by this vulnerability
even if I have ssl_verify_client off?

Many thanks in advance.

Posted at Nginx Forum:
https://forum.nginx.org/read.php?2,291662,291662#msg-291662

From nginx-forum at forum.nginx.org Thu May 27 18:55:03 2021
From: nginx-forum at forum.nginx.org (mandela)
Date: Thu, 27 May 2021 14:55:03 -0400
Subject: Request for comments on Nginx configuration
Message-ID: <978b154f0f2638185fd19dbcca86a663.NginxMailingListEnglish@forum.nginx.org>

Hello all.
I would like to have comments from the Nginx community on the following configuration: worker_processes auto; error_log /var/www/log/nginx.log; events { multi_accept on; worker_connections 16384; } http { include nginx.deny; include mime.types; default_type application/octet-stream; aio on; sendfile on; tcp_nopush on; gzip on; gzip_comp_level 6; gzip_min_length 1024; gzip_types application/javascript application/json application/xml image/svg+xml image/x-icon text/plain text/css text/xml ; lua_shared_dict dict 16k; log_format main $time_iso8601 ' srs="$status"' ' srt="$request_time"' ' crl="$request"' ' crh="$host"' ' cad="$remote_addr"' ' ssp="$server_port"' ' scs="$upstream_cache_status"' ' sua="$upstream_addr"' ' suc="$upstream_connect_time"' ' sut="$upstream_response_time"' ' sgz="$gzip_ratio"' ' sbs="$body_bytes_sent"' ' cau="$remote_user"' ' ccr="$connection_requests"' ' ccp="$pipe"' ' crs="$scheme"' ' crm="$request_method"' ' cru="$request_uri"' ' crp="$server_protocol"' ' chh="$http_host"' ' cha="$http_user_agent"' ' chr="$http_referer"' ' chf="$http_x_forwarded_for"' ; server_tokens off; reset_timedout_connection on; access_log /var/www/log/access.log main; fastcgi_cache main; fastcgi_cache_key $host:$server_port$uri; fastcgi_cache_methods GET HEAD; fastcgi_ignore_headers Cache-Control Expires; fastcgi_cache_path /tmp/nginx levels=2:2 keys_zone=main:4m inactive=24h ; ssl_certificate /etc/ssl/server.pem; ssl_certificate_key /etc/ssl/server.key; ssl_prefer_server_ciphers on; ssl_session_cache shared:SSL:4m; ssl_session_timeout 15m; upstream upstream { server unix:/tmp/php-fpm.sock; server 127.0.0.1:9000; server [::1]:9000; } map $http_origin $_origin { default *; '' ''; } server { listen 80; return 301 https://$host$request_uri; } server { listen 443 ssl http2; include nginx.filter; location / { set $_v1 ''; set $_v2 ''; set $_v3 ''; rewrite_by_lua_block { local dict = ngx.shared.dict local host = ngx.var.host local data = dict:get(host) if data == nil then local labels = {} for s in host:gmatch('[^.]+') do table.insert(labels, 1, s) end data = labels[1] or '' local index = 2 while index <= #labels and #data < 7 do data = data .. '/' .. labels[index] index = index + 1 end local f = '/usr/home/www/src/' .. data .. '/app.php' local _, _, code = os.rename(f, f) if code == 2 then return ngx.exit(404) end if labels[index] == 'cdn' then data = data .. '|/tmp/www/cdn/' .. data else data = data .. '|/var/www/pub/' .. table.concat(labels, '/') .. '/-' end data = data .. '|' .. f dict:add(host, data) ngx.log(ngx.ERR, 'dict:add('..host..','..data..')') end local i = 1 for s in data:gmatch('[^|]+') do ngx.var["_v" .. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291674,291674#msg-291674

From nginx-forum at forum.nginx.org  Thu May 27 18:55:24 2021
From: nginx-forum at forum.nginx.org (Rai Mohammed)
Date: Thu, 27 May 2021 14:55:24 -0400
Subject: How to do a large buffer size > 64k uWSGI requests with Nginx proxy | uwsgi request is too big with nginx
Message-ID: <c303563b562a6243138081c03cc4c7ff.NginxMailingListEnglish@forum.nginx.org>

How to do a large buffer size > 64k uWSGI requests with Nginx proxy

Deployment stack :
Odoo ERP 12
Python 3.7.10 and Werkzeug 0.16.1 as backend
Nginx proxy : 1.20.0
uWSGI : 2.0.19.1
OS : FreeBSD 13.0-RELEASE

Nginx throws an alert from uwsgi that the request is too big:

Alert : uwsgi request is too big: 81492, client: 10.29.79.250, server: odoo12ce-erp, request: "GET /web/webclient/..........."

As you can see, I increased the "uwsgi_buffer_size" in both uwsgi.ini and nginx.conf.

Nginx config :
`
{
    # Increase the size of the buffers to handle Odoo data
    # Activate uwsgi_buffering
    uwsgi_buffering on;
    uwsgi_buffers 16 128k;
    uwsgi_buffer_size 128k;
    uwsgi_busy_buffers_size 256k;
    # uwsgi_max_temp_file_size with a zero value disables buffering of
    # responses to temporary files
    uwsgi_max_temp_file_size 0;
    uwsgi_temp_file_write_size 256k;

    uwsgi_read_timeout 900s;
    uwsgi_connect_timeout 900s;
    uwsgi_send_timeout 900s;
}
`

uwsgi.ini config :
`
[uwsgi]
strict = true
pcre-jit = true
#limit-as = 1024
#never-swap = true

pidfile = /var/run/odoo_erp/odoo12ce_uwsgi.pid
# safe-pidfile = /var/run/odoo_erp/odoo12ce.pid

# Enable REUSE_PORT flag on socket to allow multiple instances binding
# on the same address (BSD only).
reuse-port = true

# Testing with www or odoo12ce
uid = odoo12ce
gid = odoo12ce

# To test and verify
callable = application
#module = odoo.service.wsgi_server:application

# Enable uwsgi master process
master = true
lazy = true
lazy-apps = true

# Turn on memory usage report
#memory-report = true

enable-threads = true
threads = 2
thunder-lock = true
so-keepalive = true

buffer-size = 262144
http-buffer-size = 262144

response-headers-limit = 262144
http-headers-timeout = 900
# Set max connections to 1024 in uWSGI
listen = 1024

so-send-timeout = 900
socket-send-timeout = 900
so-write-timeout = 900
socket-write-timeout = 900

http-timeout = 900
socket-timeout = 900

wsgi-accept-buffer = true
wsgi-accept-buffers = true
# Clear environment on exit and delete sockets during shutdown
vacuum = true
single-interpreter = true

# Shut down when receiving SIGTERM (default is respawn)
die-on-term = true
need-app = true

# Disable built-in logging
disable-logging = false
# ... but log 4xx's and 5xx's anyway
log-4xx = true
log-5xx = true

# Full path to the Odoo 12 CE project's root directory
chdir = /odoo_erp/odoo12ce/odoo12ce_server
#chdir2 = = /odoo_erp/odoo12ce/odoo12ce_server

pythonpath = /odoo_erp/odoo12ce/odoo12ce_server

# Odoo 12 CE's WSGI file
wsgi-file = /odoo_erp/odoo12ce/odoo12ce_server/setup/odoo12ce-uwsgi.py

#emperor = /odoo_erp/odoo12ce/vassals

uwsgi-socket = 127.0.0.1:8070
uwsgi-socket = 127.0.0.1:8170

# Daemonize uwsgi and write messages into the given log
daemonize = /var/log/odoo_erp/odoo12ce/odoo12ce_uwsgi_emperor.log

# Restart workers after this many requests
max-requests = 2000

# Restart workers after this many seconds
max-worker-lifetime = 3600

# Restart workers after this much resident memory
reload-on-rss = 2048

# How long to wait before forcefully killing workers
worker-reload-mercy = 90

# Maximum number of workers allowed (cpu * 2)
processes = 8
`

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291675,291675#msg-291675

From vbart at nginx.com  Thu May 27 19:26:29 2021
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Thu, 27 May 2021 22:26:29 +0300
Subject: Unit 1.24.0 release
Message-ID: <4650105.31r3eYUQgx@vbart-laptop>

Hi,

I'm glad to announce a new release of NGINX Unit.

This one is full of shiny new features. But before I dive into the details, let me introduce our new developers without whom this release wouldn't be so feature-rich. Please welcome Zhidao Hong (洪志道) and Oisín Canty.

Zhidao has already been contributing to various nginx open-source projects for years as a community member, and I'm very excited to finally have him on board.

Oisín is a university student who's very interested in Unit; he joined our dev team as an intern and has already shown solid coding skills, curiosity, and attention to detail, which is so important to our project. Good job!

Now, back to the features. I'd like to highlight the first of our improvements in serving static media assets.

:: MIME Type Filtering ::

Now, you can restrict file serving by MIME type:

    {
        "share": "/www/data",
        "types": [ "image/*", "video/*" ]
    }

The configuration above allows only files with various video and image extensions; all other requests will return status code 403.

In particular, this goes well with the "fallback" option that performs another action if the "share" returns a 40x error:

    {
        "share": "/www/data",
        "types": [ "!application/x-httpd-php" ],

        "fallback": {
            "pass": "applications/php"
        }
    }

Here, all requests to existing files other than ".php" will be served as static content while the rest will be passed to a PHP application.

More examples and documentation snippets are available here:

  - https://unit.nginx.org/configuration/#mime-filtering
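If you drive Unit from a script, the same "types" snippet can be applied through the control API, which is plain HTTP over Unit's unix control socket. A minimal Python sketch follows; the socket path and the "routes/0/action" target are assumptions, so adjust them to your own configuration layout:

    import http.client
    import json
    import socket

    # Unit's control API speaks HTTP over a unix domain socket.
    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, socket_path):
            super().__init__('localhost')
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    action = {'share': '/www/data', 'types': ['image/*', 'video/*']}
    conn = UnixHTTPConnection('/var/run/control.unit.sock')   # assumed default path
    conn.request('PUT', '/config/routes/0/action', body=json.dumps(action))
    print(conn.getresponse().read().decode())                 # expect a "success" message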
:: Chrooting and Path Restrictions When Serving Files ::

As we take security seriously, Unit now introduces the ability to chroot not only its application processes but also the static files it serves, on a per-request basis. Additionally, you can restrict traversal of mounting points and symbolic link resolution:

    {
        "share": "/www/data/static/",
        "chroot": "/www/data/",
        "follow_symlinks": false,
        "traverse_mounts": false
    }

See here for more information:

  - https://unit.nginx.org/configuration/#path-restrictions

For details of Unit's application process isolation abilities:

  - https://unit.nginx.org/configuration/#process-isolation

Other notable features unrelated to static file serving:

 * Multiple WSGI/ASGI Python entry points per process

   It allows loading multiple modules or app entry points into a single Python process, choosing between them when handling requests with the full power of Unit's routes system.

   See here for Python's "targets" object description:

     - https://unit.nginx.org/configuration/#configuration-python-targets

   And here, more info about Unit's internal routing:

     - https://unit.nginx.org/configuration/#routes

 * Automatic overloading of "http" and "websocket" modules in Node.js

   Now you can run Node.js apps on Unit without touching their sources:

     - https://unit.nginx.org/configuration/#node-js

 * Applying OpenSSL configuration commands

   Finally, you can control various TLS settings via OpenSSL's generic configuration interface with all the dynamic power of Unit:

     - https://unit.nginx.org/configuration/#ssl-tls-configuration

The full changelog for the release:

Changes with Unit 1.24.0                                    27 May 2021

    *) Change: PHP added to the default MIME type list.

    *) Feature: arbitrary configuration of TLS connections via OpenSSL
       commands.

    *) Feature: the ability to limit static file serving by MIME types.

    *) Feature: support for chrooting, rejecting symlinks, and rejecting
       mount point traversal on a per-request basis when serving static
       files.

    *) Feature: a loader for automatically overriding the "http" and
       "websocket" modules in Node.js.

    *) Feature: multiple "targets" in Python applications.

    *) Feature: compatibility with Ruby 3.0.

    *) Bugfix: the router process could crash while closing a TLS
       connection.

    *) Bugfix: a segmentation fault might have occurred in the PHP module
       if fastcgi_finish_request() was used with the "auto_globals_jit"
       option enabled.

That's all for today, but even more exciting features are poised for the upcoming releases:

  - statistics API
  - process control API
  - variables from regexp captures in the "match" object
  - simple request rewrites using variables
  - variables support in static file serving options
  - ability to override the client IP from the X-Forwarded-For header
  - TLS sessions cache and tickets

Also, please check our GitHub to follow the development and discuss new features:

  - https://github.com/nginx/unit

Stay tuned!

wbr, Valentin V. Bartenev
From mdounin at mdounin.ru  Thu May 27 23:53:53 2021
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 28 May 2021 02:53:53 +0300
Subject: How to do a large buffer size > 64k uWSGI requests with Nginx proxy | uwsgi request is too big with nginx
In-Reply-To: References: Message-ID:

Hello!

On Thu, May 27, 2021 at 02:55:24PM -0400, Rai Mohammed wrote:

> How to do a large buffer size > 64k uWSGI requests with Nginx proxy
>
> Deployment stack :
> Odoo ERP 12
> Python 3.7.10 and Werkzeug 0.16.1 as backend
> Nginx proxy : 1.20.0
> uWSGI : 2.0.19.1
> OS : FreeBSD 13.0-RELEASE
>
> Nginx throws an alert from uwsgi that the request is too big:
> Alert : uwsgi request is too big: 81492, client: 10.29.79.250, server:
> odoo12ce-erp, request: "GET /web/webclient/..........."
>
> As you can see, I increased the "uwsgi_buffer_size" in both uwsgi.ini
> and nginx.conf.

The uwsgi protocol uses a 16-bit datasize field[1], and this limits the maximum size of all headers in a request to uwsgi backends. The error message from nginx suggests you are hitting this limit. Unfortunately, using larger buffers won't help here.

In most cases, such huge request headers indicate that there is a bug somewhere. For example, nginx by default limits the total size of request headers to 32k (see [2]). A similar 64k limit also exists in FastCGI (though with the protocol clearly defining how to provide additional data if needed, just not implemented in nginx), and the only case when it was questioned was due to a miscoded client (see [3]).

If such huge request headers are nevertheless intentional, the simplest solution would probably be to switch to a different protocol, such as HTTP.

[1] https://uwsgi-docs.readthedocs.io/en/latest/Protocol.html
[2] http://nginx.org/r/large_client_header_buffers
[3] https://trac.nginx.org/nginx/ticket/239
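To make the limit concrete, here is a small Python sketch of the 4-byte uwsgi packet header described in [1]; the 16-bit datasize field is what overflows in your case:

    import struct

    # uwsgi packet header per [1]: modifier1 (u8), datasize (u16, little-endian),
    # modifier2 (u8). datasize covers the serialized request variables, so no
    # request to a uwsgi backend can carry more than 65535 bytes of headers,
    # whatever buffer sizes nginx is configured with.
    def uwsgi_header(datasize, modifier1=0, modifier2=0):
        if datasize > 0xFFFF:
            raise ValueError('datasize %d does not fit in 16 bits' % datasize)
        return struct.pack('<BHB', modifier1, datasize, modifier2)

    uwsgi_header(32768)   # fits
    uwsgi_header(81492)   # the size from your alert: raises ValueError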
-- 
Maxim Dounin
http://mdounin.ru/

From nginx-forum at forum.nginx.org  Fri May 28 01:59:13 2021
From: nginx-forum at forum.nginx.org (Rai Mohammed)
Date: Thu, 27 May 2021 21:59:13 -0400
Subject: How to do a large buffer size > 64k uWSGI requests with Nginx proxy | uwsgi request is too big with nginx
In-Reply-To: References: Message-ID: <06b5de642b1389c66bdbe8e193824603.NginxMailingListEnglish@forum.nginx.org>

Hello,

Yes, I have looked into the request that generates this big header, and it's a GET URI pulling all the features installed and requested by the user of the ERP.

Before I integrated the uWSGI layer, the stack deployment with Nginx KTLS HTTP2 worked perfectly, with no buffer sizing problems. The reason I added the uWSGI layer is to use the uwsgi-socket binary protocol; it works fine and is very fast, decreasing the load time of the main web page. So for now I have to switch to the http-socket protocol and configure HTTP2 in uWSGI. I hope that in the future nginx will allow such huge header sizes.

Thanks for your reply and clarifications.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,291675,291680#msg-291680

From francis at daoine.org  Fri May 28 08:00:00 2021
From: francis at daoine.org (Francis Daly)
Date: Fri, 28 May 2021 09:00:00 +0100
Subject: Help: Using Nginx Reverse Proxy bypass traffic in to a application running in a container
In-Reply-To: References: Message-ID: <20210528080000.GC11167@daoine.org>

On Tue, May 25, 2021 at 09:47:47PM +0530, Amila Gunathilaka wrote:

Hi there,

> I'm sorry for taking time to reply to this, you were so keen about my
> problem. Thank you.

No worries at all -- the mailing list is not an immediate-response medium.

> Actually my problem was when sending *response* to the load balancer from
> the nginx (not the request, it should be corrected as the *response* in my
> previous email).
> Such as my external load balancer is always doing a health check for my
> nginx port (80); below is the *response* message in the
> /var/log/nginx/access.log against the health check request coming from
> the external load balancer.

As I understand it, the load balancer is making the request "OPTIONS /" to nginx, and nginx is responding with a http 405, and you don't want nginx to do that.

What response do you want nginx to give to the request?

Your config makes it look like nginx is told to proxy_pass the OPTIONS request to your port-9091 server, so I presume that your port-9091 server is responding 405 to the OPTIONS request and nginx is passing the response from the 9091 upstream to the load-balancer client.

Your port-9091 logs or traffic analysis should show that that is the case.

If that is the case, you *could* fix it by telling your 9091 upstream to respond how you want it to to the "OPTIONS /" request (using its config); or you could configure nginx to intercept the request and handle it itself, without proxy_pass'ing it.

The first case would mean that the "health check" is actually testing the full nginx-to-upstream chain; the second would have it only testing that nginx is responding.

If you decide that you want nginx to handle this request itself, and to respond with a http 204, you could add something like

    if ($request_method = "OPTIONS") { return 204; }

inside the "location /" block.

(Strictly: that would tell nginx to handle all "OPTIONS /anything" requests, not just "OPTIONS /".)

You would not need the error_page directives that you show.

You could instead add a new "location = /" block, and do the OPTIONS check there; but you would probably also have to duplicate the three other lines from the "location /" block -- sometimes people prefer "tidy-looking" configuration over "correctness and probable machine efficiency". Pick which you like; if you do not measure a difference, there is not a difference that you care about.

That is, you want either one location:

> server {
>     listen 80;
>     server_name 172.25.234.105;
>     location / {
          if ($request_method = "OPTIONS") { return 204; }
>         proxy_pass http://127.0.0.1:9091;
>         auth_basic "PROMETHEUS PUSHGATEWAY Login Area";
>         auth_basic_user_file /etc/nginx/.htpasswd;
>     }
> }

or two locations:

    location = / {
        if ($request_method = "OPTIONS") { return 204; }
        proxy_pass http://127.0.0.1:9091;
        auth_basic "PROMETHEUS PUSHGATEWAY Login Area";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }

    location / {
        proxy_pass http://127.0.0.1:9091;
        auth_basic "PROMETHEUS PUSHGATEWAY Login Area";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }

(And, if you use the two, you could potentially move the "auth_basic" and "auth_basic_user_file" outside the "location", to be directly within "server"; that does depend on what else is in your config file.)

If you want something else in the response to the OPTIONS request, you can change the "return" response code, or "add_header" and the like.
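If it helps while testing, you can watch what the health-check request actually gets back; a small Python sketch (the address is the one from your config, adjust as needed):

    import http.client

    # Issue the same "OPTIONS /" request the load balancer's health check makes.
    conn = http.client.HTTPConnection('172.25.234.105', 80, timeout=5)
    conn.request('OPTIONS', '/')
    resp = conn.getresponse()
    # Expect 204 with the "if" block above in place; 405 is what you reported before.
    print(resp.status, resp.reason)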
Good luck with it,

f
-- 
Francis Daly        francis at daoine.org

From amila.kdam at gmail.com  Sat May 29 13:41:38 2021
From: amila.kdam at gmail.com (Amila Gunathilaka)
Date: Sat, 29 May 2021 19:11:38 +0530
Subject: Help: Using Nginx Reverse Proxy bypass traffic in to a application running in a container
In-Reply-To: References: Message-ID:

Hi Francis,

Thanks for your reply; here are my findings, progress, and current situation. First I will answer your questions, and then you will have a better idea of whether I'm on the correct path and can guide me.

> As I understand it, the load balancer is making the request "OPTIONS /"
> to nginx, and nginx is responding with a http 405, and you don't want
> nginx to do that.
>
> What response do you want nginx to give to the request?

Yes, you are absolutely right. I wanted nginx to stop that 405 response and give the success response 200, or even 401, so I can confirm my proxy_pass and basic auth are working.

Also, I think the 405 response is coming *from nginx itself* to the external load balancer, because the external load balancer communicates directly with nginx (80), and my upstream server (the port-9091 server) is not a webapp; it's just a binary running inside a Docker container. But it may be coming from the 9091 server app after all. You could investigate the 9091 server app here if you are interested: https://hub.docker.com/r/prom/pushgateway/tags?page=1&ordering=last_updated . Let me know as well whether it's the 9091 app or nginx causing the problem, if you can find out.

Anyway, I decided to fix the OPTIONS method issue on the external load balancer itself: I logged in to my external load balancer's config page and changed the HTTP health checks from OPTIONS to the *GET* method. And yes, the 405 error is now gone. But now I'm getting 401 responses, which should be the correct response since I'm using basic auth in my nginx.conf file. Below is my nginx.conf, FYI:

worker_rlimit_nofile 30000;
events {
    worker_connections 30000;
}

http {
    #upstream pushgateway_upstreams {
    #    server 127.0.0.1:9091;
    #}
    server {
        listen 172.25.234.105:80;
        #server_name 172.25.234.105;
        location /metrics {
            proxy_pass http://127.0.0.1:9091/metrics;
            auth_basic "PROMETHEUS PUSHGATEWAY Login Area";
            auth_basic_user_file /etc/nginx/.htpasswd;
        }
    }
}

So I can confirm that proxy_pass is working, because when I browse my application it now returns the 401 message:

curl -v http://172.25.234.105:80/metrics
*   Trying 172.25.234.105:80...
* TCP_NODELAY set
* Connected to 172.25.234.105 (172.25.234.105) port 80 (#0)
> GET /metrics HTTP/1.1
> Host: 172.25.234.105
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 401 Unauthorized
< Server: nginx/1.18.0 (Ubuntu)
< Date: Sat, 29 May 2021 13:29:00 GMT
< Content-Type: text/html
< Content-Length: 188
< Connection: keep-alive
< WWW-Authenticate: Basic realm="PROMETHEUS PUSHGATEWAY Login Area"
<
<html>
<head><title>401 Authorization Required</title></head>
<body>
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx/1.18.0 (Ubuntu)</center>
</body>
</html>
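And the same request with credentials attached should return 200 rather than 401; a small Python sketch of that check ("user" and "secret" below are placeholders for whatever pair is in /etc/nginx/.htpasswd):

    import base64
    import http.client

    # Repeat the request with HTTP Basic credentials attached.
    token = base64.b64encode(b'user:secret').decode()     # placeholder credentials
    conn = http.client.HTTPConnection('172.25.234.105', 80, timeout=5)
    conn.request('GET', '/metrics', headers={'Authorization': 'Basic ' + token})
    resp = conn.getresponse()
    print(resp.status, resp.reason)   # 200 when the credentials match, 401 otherwise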
Seems like everything is fine for now. Any questions or enhancements are welcome. Thanks, Francis.

Thanks,

Amila
DevOps Engineer
From amila.kdam at gmail.com  Sat May 29 14:16:40 2021
From: amila.kdam at gmail.com (Amila Gunathilaka)
Date: Sat, 29 May 2021 19:46:40 +0530
Subject: nginx Digest, Vol 139, Issue 29
In-Reply-To: References: Message-ID:

Dear Francis,

This is a follow-up to my previous email; I just want to add something. My concern is why nginx still gives 401 responses even though my nginx.conf points basic authentication at the username/password file in /etc/nginx/.htpasswd. Does that mean it still does not authenticate my external client's POST requests? Any thoughts?

Below is a screenshot of the /var/log/nginx/access.log output.

[image: image.png]

Thank you,

Amila
Re: Help: Using Nginx Reverse Proxy bypass traffic in to a > application running in a container (Amila Gunathilaka) > > > ---------------------------------------------------------------------- > > Message: 1 > Date: Sat, 29 May 2021 19:11:38 +0530 > From: Amila Gunathilaka > To: nginx at nginx.org, francis at daoine.org > Subject: Re: Help: Using Nginx Reverse Proxy bypass traffic in to a > application running in a container > Message-ID: > < > CALqQtdy2RvkhHjAYQkZU-EhzOZG+9fnG-GXE9wuWzM-SQTbNjg at mail.gmail.com> > Content-Type: text/plain; charset="utf-8" > > Hi Francis, > > Thanks for your reply email, here are my findings and progress and current > situation about my issue. First of all I will answer your questions and > then you will have a better idea to guide me and whether I'm on the correct > path. > > >As I understand it, the load balancer is making the request "OPTIONS /" > >to nginx, and nginx is responding with a http 405, and you don't want > >nginx to do that. > > >What response do you want nginx to give to the request? > > Yes you are absolutely right I wanted nginx to stop that 405 response and > give the success response 200 or even 401 which I can confirm my proxy pass > and basic auth is working. > > Also I think that 405 response is coming *from nginx itself *to the > external load balancer because external load balancer directly > communicating with the nginx (80) and also my upstream server (9091 port > server) is not a webapp it's just a binary file running inside docker > container. But anyway maybe it's coming from the 9091 server app. You > could investigate the 9091 server app here if you are interested - > https://hub.docker.com/r/prom/pushgateway/tags?page=1&ordering=last_updated > . > Let me know me as well, weather it's the 9091 app or nginx causing problems > if you can find out. > > > Anyway I thought to fix the OPTIONS method fix on the external load > balancer itself , and I logged in to my external load balancer configs > page and I changed the HTTP health checks using OPTIONS into *GET * > method. > ANd yeah now 405 error gone. But now I'm getting 401 responses , which > should be the correct response since I'm using a basic auth in my > nginx.conf file. Below is my nginx.conf FYI > > worker_rlimit_nofile 30000; > events { > worker_connections 30000; > } > > http { > > #upstream pushgateway_upstreams { > # server 127.0.0.1:9091; > # } > server { > listen 172.25.234.105:80; > #server_name 172.25.234.105; > location /metrics { > proxy_pass http://127.0.0.1:9091/metrics; > auth_basic "PROMETHEUS PUSHGATEWAY Login Area"; > auth_basic_user_file /etc/nginx/.htpasswd; > } > > } > } > > > * So I can confirm that proxy_pass is working because when I browse my > application it returns the 401 message now. > > > *curl -v http://172.25.234.105:80/metrics > * > * Trying 172.25.234.105:80... > * TCP_NODELAY set > * Connected to 172.25.234.105 (172.25.234.105) port 80 (#0) > > GET /metrics HTTP/1.1 > > Host: 172.25.234.105 > > User-Agent: curl/7.68.0 > > Accept: */* > > > * Mark bundle as not supporting multiuse > < HTTP/1.1 401 Unauthorized > < Server: nginx/1.18.0 (Ubuntu) > < Date: Sat, 29 May 2021 13:29:00 GMT > < Content-Type: text/html > < Content-Length: 188 > < Connection: keep-alive > < WWW-Authenticate: Basic realm="PROMETHEUS PUSHGATEWAY Login Area" > < > > *401 *Authorization Required > >

*401* Authorization Required

>
nginx/1.18.0 (Ubuntu)
> > > > > Seems like everything is fine for now. Any questions or any enhancements > are welcome. Thanks Francis. > > > Thanks > > Amila > Devops Engineer > > > > > > > On Fri, May 28, 2021 at 1:30 PM wrote: > > > Send nginx mailing list submissions to > > nginx at nginx.org > > > > To subscribe or unsubscribe via the World Wide Web, visit > > http://mailman.nginx.org/mailman/listinfo/nginx > > or, via email, send a message with subject or body 'help' to > > nginx-request at nginx.org > > > > You can reach the person managing the list at > > nginx-owner at nginx.org > > > > When replying, please edit your Subject line so it is more specific > > than "Re: Contents of nginx digest..." > > > > > > Today's Topics: > > > > 1. Request for comments on Nginx configuration (mandela) > > 2. How to do a large buffer size > 64k uWSGI requests with Nginx > > proxy | uwsgi request is too big with nginx (Rai Mohammed) > > 3. Unit 1.24.0 release (Valentin V. Bartenev) > > 4. Re: How to do a large buffer size > 64k uWSGI requests with > > Nginx proxy | uwsgi request is too big with nginx (Maxim Dounin) > > 5. Re: How to do a large buffer size > 64k uWSGI requests with > > Nginx proxy | uwsgi request is too big with nginx (Rai Mohammed) > > 6. Re: Help: Using Nginx Reverse Proxy bypass traffic in to a > > application running in a container (Francis Daly) > > > > > > ---------------------------------------------------------------------- > > > > Message: 1 > > Date: Thu, 27 May 2021 14:55:03 -0400 > > From: "mandela" > > To: nginx at nginx.org > > Subject: Request for comments on Nginx configuration > > Message-ID: > > < > > 978b154f0f2638185fd19dbcca86a663.NginxMailingListEnglish at forum.nginx.org > > > > > > Content-Type: text/plain; charset=UTF-8 > > > > Hello all. 
I would like to have comments from the Nginx community on the > > following configuration: > > > > worker_processes auto; > > error_log /var/www/log/nginx.log; > > > > events { > > multi_accept on; > > worker_connections 16384; > > } > > > > http { > > include nginx.deny; > > include mime.types; > > default_type application/octet-stream; > > aio on; > > sendfile on; > > tcp_nopush on; > > gzip on; > > gzip_comp_level 6; > > gzip_min_length 1024; > > gzip_types > > application/javascript > > application/json > > application/xml > > image/svg+xml > > image/x-icon > > text/plain > > text/css > > text/xml > > ; > > lua_shared_dict dict 16k; > > log_format main $time_iso8601 > > ' srs="$status"' > > ' srt="$request_time"' > > ' crl="$request"' > > ' crh="$host"' > > ' cad="$remote_addr"' > > ' ssp="$server_port"' > > ' scs="$upstream_cache_status"' > > ' sua="$upstream_addr"' > > ' suc="$upstream_connect_time"' > > ' sut="$upstream_response_time"' > > ' sgz="$gzip_ratio"' > > ' sbs="$body_bytes_sent"' > > ' cau="$remote_user"' > > ' ccr="$connection_requests"' > > ' ccp="$pipe"' > > ' crs="$scheme"' > > ' crm="$request_method"' > > ' cru="$request_uri"' > > ' crp="$server_protocol"' > > ' chh="$http_host"' > > ' cha="$http_user_agent"' > > ' chr="$http_referer"' > > ' chf="$http_x_forwarded_for"' > > ; > > server_tokens off; > > reset_timedout_connection on; > > access_log /var/www/log/access.log main; > > > > fastcgi_cache main; > > fastcgi_cache_key $host:$server_port$uri; > > fastcgi_cache_methods GET HEAD; > > fastcgi_ignore_headers Cache-Control Expires; > > fastcgi_cache_path /tmp/nginx > > levels=2:2 > > keys_zone=main:4m > > inactive=24h > > ; > > ssl_certificate /etc/ssl/server.pem; > > ssl_certificate_key /etc/ssl/server.key; > > ssl_prefer_server_ciphers on; > > ssl_session_cache shared:SSL:4m; > > ssl_session_timeout 15m; > > > > upstream upstream { > > server unix:/tmp/php-fpm.sock; > > server 127.0.0.1:9000; > > server [::1]:9000; > > } > > > > map $http_origin $_origin { > > default *; > > '' ''; > > } > > > > server { > > listen 80; > > return 301 https://$host$request_uri; > > } > > > > server { > > listen 443 ssl http2; > > include nginx.filter; > > location / { > > set $_v1 ''; > > set $_v2 ''; > > set $_v3 ''; > > rewrite_by_lua_block { > > local dict = ngx.shared.dict > > local host = ngx.var.host > > local data = dict:get(host) > > if data == nil then > > local labels = {} > > for s in host:gmatch('[^.]+') do > > table.insert(labels, 1, > s) > > end > > data = labels[1] or '' > > local index = 2 > > while index <= #labels and #data > < > > 7 do > > data = data .. '/' .. > > labels[index] > > index = index + 1 > > end > > local f = '/usr/home/www/src/' .. > > data .. '/app.php' > > local _, _, code = os.rename(f, > f) > > if code == 2 then > > return ngx.exit(404) > > end > > if labels[index] == 'cdn' then > > data = data .. > > '|/tmp/www/cdn/' .. data > > else > > data = data .. > > '|/var/www/pub/' > > .. > > table.concat(labels, '/') .. '/-' > > end > > data = data .. '|' .. f > > dict:add(host, data) > > ngx.log(ngx.ERR, > > 'dict:add('..host..','..data..')') > > end > > local i = 1 > > for s in data:gmatch('[^|]+') do > > ngx.var["_v" .. 
i] = s > > i = i + 1 > > end > > } > > alias /; > > try_files > > $_v2$uri > > /var/www/pub/$_v1/!$uri > > /var/www/pub/!$uri > > @; > > add_header Access-Control-Allow-Origin $_origin; > > expires 28d; > > } > > location dir: { > > alias /; > > index :none; > > autoindex on; > > } > > location file: { > > alias /; > > } > > location @ { > > fastcgi_param DOCUMENT_ROOT $_v2; > > fastcgi_param SCRIPT_FILENAME $_v3; > > fastcgi_param SCRIPT_NAME > $fastcgi_script_name; > > fastcgi_param SERVER_PROTOCOL $server_protocol; > > fastcgi_param SERVER_ADDR $server_addr; > > fastcgi_param SERVER_PORT $server_port; > > fastcgi_param SERVER_NAME $host; > > fastcgi_param REMOTE_ADDR $remote_addr; > > fastcgi_param REMOTE_PORT $remote_port; > > fastcgi_param REQUEST_SCHEME $scheme; > > fastcgi_param REQUEST_METHOD $request_method; > > fastcgi_param REQUEST_URI $request_uri; > > fastcgi_param QUERY_STRING $query_string; > > fastcgi_param CONTENT_TYPE $content_type; > > fastcgi_param CONTENT_LENGTH $content_length; > > fastcgi_pass upstream; > > } > > } > > } > > > > Posted at Nginx Forum: > > https://forum.nginx.org/read.php?2,291674,291674#msg-291674 > > > > > > > > ------------------------------ > > > > Message: 2 > > Date: Thu, 27 May 2021 14:55:24 -0400 > > From: "Rai Mohammed" > > To: nginx at nginx.org > > Subject: How to do a large buffer size > 64k uWSGI requests with Nginx > > proxy | uwsgi request is too big with nginx > > Message-ID: > > < > > c303563b562a6243138081c03cc4c7ff.NginxMailingListEnglish at forum.nginx.org > > > > > > Content-Type: text/plain; charset=UTF-8 > > > > How to do a large buffer size > 64k uWSGI requests with Nginx proxy > > > > Deployment stack : > > Odoo ERP 12 > > Python 3.7.10 and Werkzeug 0.16.1 as backend > > Nginx proxy : 1.20.0 > > uWSGI : 2.0.19.1 > > OS : FreeBSD 13.0-RELEASE > > > > Nginx throw an alert from uwsgi of request is too big > > Alert : uwsgi request is too big: 81492, client: 10.29.79.250, server: > > odoo12ce-erp, request: "GET /web/webclient/..........." > > > > As you can see I increased the "uwsgi_buffer_size " in both uwsgi.ini and > > nginx.conf. > > > > Nginx config : > > `{ > > > > # increase the size of the buffers to handle odoo data > > # Activate uwsgi_buffering > > uwsgi_buffering on; > > uwsgi_buffers 16 128k; > > uwsgi_buffer_size 128k; > > uwsgi_busy_buffers_size 256k; > > # uwsgi_max_temp_file_size with zero value disables buffering of > > responses to temporary files > > uwsgi_max_temp_file_size 0; > > uwsgi_temp_file_write_size 256k; > > > > uwsgi_read_timeout 900s; > > uwsgi_connect_timeout 900s; > > uwsgi_send_timeout 900s; > > > > }` > > > > uwsgi.ini config : > > > > ` > > > > [uwsgi] > > strict = true > > pcre-jit = true > > #limit-as = 1024 > > #never-swap = true > > > > pidfile = /var/run/odoo_erp/odoo12ce_uwsgi.pid > > # safe-pidfile = /var/run/odoo_erp/odoo12ce.pid > > > > # Enable REUSE_PORT flag on socket to allow multiple instances > binding > > on the same address (BSD only). 
> > reuse-port = true > > > > # Testing with www or odoo12ce > > uid = odoo12ce > > gid = odoo12ce > > > > # To test and verification > > callable = application > > # To test and verification > > #module = odoo.service.wsgi_server:application > > > > # enable uwsgi master process > > master = true > > lazy = true > > lazy-apps=true > > > > # turn on memory usage report > > #memory-report=true > > > > enable-threads = true > > threads = 2 > > thunder-lock = true > > so-keepalive = true > > > > buffer-size = 262144 > > http-buffer-size = 262144 > > > > response-headers-limit = 262144 > > http-headers-timeout = 900 > > # set max connections to 1024 in uWSGI > > listen = 1024 > > > > so-send-timeout = 900 > > socket-send-timeout = 900 > > so-write-timeout = 900 > > socket-write-timeout = 900 > > > > http-timeout = 900 > > socket-timeout = 900 > > > > wsgi-accept-buffer = true > > wsgi-accept-buffers = true > > # clear environment on exit and Delete sockets during shutdown > > vacuum = true > > single-interpreter = true > > > > # Shutdown when receiving SIGTERM (default is respawn) > > die-on-term = true > > need-app = true > > > > # Disable built-in logging > > disable-logging = false > > > > # but log 4xx's and 5xx's anyway > > log-4xx = true > > log-5xx = true > > > > # full path to Odoo12ce project's root directory > > chdir = /odoo_erp/odoo12ce/odoo12ce_server > > #chdir2 = = /odoo_erp/odoo12ce/odoo12ce_server > > > > pythonpath = /odoo_erp/odoo12ce/odoo12ce_server > > > > # odoo12ce's wsgi file > > wsgi-file = > /odoo_erp/odoo12ce/odoo12ce_server/setup/odoo12ce-uwsgi.py > > > > #emperor = /odoo_erp/odoo12ce/vassals > > > > uwsgi-socket = 127.0.0.1:8070 > > uwsgi-socket = 127.0.0.1:8170 > > > > # daemonize uwsgi and write messages into given log > > daemonize = /var/log/odoo_erp/odoo12ce/odoo12ce_uwsgi_emperor.log > > > > # Restart workers after this many requests > > max-requests = 2000 > > > > # Restart workers after this many seconds > > max-worker-lifetime = 3600 > > > > # Restart workers after this much resident memory > > reload-on-rss = 2048 > > > > # How long to wait before forcefully killing workers > > worker-reload-mercy = 90 > > > > # Maximum number of workers allowed (cpu * 2) > > processes = 8 > > > > ` > > > > Posted at Nginx Forum: > > https://forum.nginx.org/read.php?2,291675,291675#msg-291675 > > > > > > > > ------------------------------ > > > > Message: 3 > > Date: Thu, 27 May 2021 22:26:29 +0300 > > From: "Valentin V. Bartenev" > > To: nginx at nginx.org > > Subject: Unit 1.24.0 release > > Message-ID: <4650105.31r3eYUQgx at vbart-laptop> > > Content-Type: text/plain; charset="UTF-8" > > > > Hi, > > > > I'm glad to announce a new release of NGINX Unit. > > > > This one is full of shiny new features. But before I dive into the > > details, > > let me introduce our new developers without whom this release wouldn't be > > so > > feature-rich. Please, welcome Zhidao Hong (???) and Ois?n Canty. > > > > Zhidao has already been contributing to various nginx open-source > projects > > for > > years as a community member, and I'm very excited to finally have him on > > board. > > > > Ois?n is a university student who's very interested in Unit; he joined > our > > dev > > team as an intern and already shown solid coding skills, curiosity, and > > attention to details, which is so important to our project. Good job! > > > > > > Now, back to the features. I'd like to highlight the first of our > > improvements > > in serving static media assets. 
> >
> > :: MIME Type Filtering ::
> >
> > Now you can restrict file serving by MIME type:
> >
> >     {
> >         "share": "/www/data",
> >         "types": [ "image/*", "video/*" ]
> >     }
> >
> > The configuration above serves only files with image and video MIME
> > types; all other requests get a 403 status code.
> >
> > In particular, this goes well with the "fallback" option that performs
> > another action if the "share" returns a 40x error:
> >
> >     {
> >         "share": "/www/data",
> >         "types": [ "!application/x-httpd-php" ],
> >
> >         "fallback": {
> >             "pass": "applications/php"
> >         }
> >     }
> >
> > Here, all requests to existing files other than ".php" will be served
> > as static content while the rest will be passed to a PHP application.
> >
> > More examples and documentation snippets are available here:
> >
> > - https://unit.nginx.org/configuration/#mime-filtering
> >
> >
> > :: Chrooting and Path Restrictions When Serving Files ::
> >
> > As we take security seriously, Unit now introduces the ability to
> > chroot not only its application processes but also the static files
> > it serves, on a per-request basis. Additionally, you can restrict the
> > traversal of mount points and symbolic-link resolution:
> >
> >     {
> >         "share": "/www/data/static/",
> >         "chroot": "/www/data/",
> >         "follow_symlinks": false,
> >         "traverse_mounts": false
> >     }
> >
> > See here for more information:
> >
> > - https://unit.nginx.org/configuration/#path-restrictions
> >
> > For details of Unit's application process isolation abilities:
> >
> > - https://unit.nginx.org/configuration/#process-isolation
> >
> >
> > Other notable features, unrelated to static file serving:
> >
> > * Multiple WSGI/ASGI Python entry points per process
> >
> >   This allows loading multiple modules or app entry points into a
> >   single Python process, choosing between them when handling requests
> >   with the full power of Unit's routes system (a sketch follows after
> >   this feature list).
> >
> >   See here for the description of Python's "targets" object:
> >
> >   - https://unit.nginx.org/configuration/#configuration-python-targets
> >
> >   And here, more info about Unit's internal routing:
> >
> >   - https://unit.nginx.org/configuration/#routes
> >
> > * Automatic overriding of the "http" and "websocket" modules in Node.js
> >
> >   Now you can run Node.js apps on Unit without touching their sources:
> >
> >   - https://unit.nginx.org/configuration/#node-js
> >
> > * Applying OpenSSL configuration commands
> >
> >   Finally, you can control various TLS settings via OpenSSL's generic
> >   configuration interface with all the dynamic power of Unit:
> >
> >   - https://unit.nginx.org/configuration/#ssl-tls-configuration
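> > As a rough illustration of the "targets" feature above (the app,
> > module, and URI names below are hypothetical, and the "listeners"
> > object is omitted), a single Python app with two entry points might
> > be configured like this:
> >
> >     {
> >         "applications": {
> >             "shop": {
> >                 "type": "python",
> >                 "path": "/www/shop",
> >                 "targets": {
> >                     "front": { "module": "front.wsgi" },
> >                     "api": { "module": "api.wsgi" }
> >                 }
> >             }
> >         },
> >
> >         "routes": [
> >             {
> >                 "match": { "uri": "/api/*" },
> >                 "action": { "pass": "applications/shop/api" }
> >             },
> >             {
> >                 "action": { "pass": "applications/shop/front" }
> >             }
> >         ]
> >     }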
> >
> > The full changelog for the release:
> >
> > Changes with Unit 1.24.0                                   27 May 2021
> >
> >     *) Change: PHP added to the default MIME type list.
> >
> >     *) Feature: arbitrary configuration of TLS connections via OpenSSL
> >        commands.
> >
> >     *) Feature: the ability to limit static file serving by MIME types.
> >
> >     *) Feature: support for chrooting, rejecting symlinks, and
> >        rejecting mount point traversal on a per-request basis when
> >        serving static files.
> >
> >     *) Feature: a loader for automatically overriding the "http" and
> >        "websocket" modules in Node.js.
> >
> >     *) Feature: multiple "targets" in Python applications.
> >
> >     *) Feature: compatibility with Ruby 3.0.
> >
> >     *) Bugfix: the router process could crash while closing a TLS
> >        connection.
> >
> >     *) Bugfix: a segmentation fault might have occurred in the PHP
> >        module if fastcgi_finish_request() was used with the
> >        "auto_globals_jit" option enabled.
> >
> >
> > That's all for today, but even more exciting features are poised for
> > the upcoming releases:
> >
> > - statistics API
> > - process control API
> > - variables from regexp captures in the "match" object
> > - simple request rewrites using variables
> > - variables support in static file serving options
> > - ability to override the client IP from the X-Forwarded-For header
> > - TLS session cache and tickets
> >
> > Also, please check our GitHub to follow the development and discuss
> > new features:
> >
> > - https://github.com/nginx/unit
> >
> > Stay tuned!
> >
> > wbr, Valentin V. Bartenev
> >
> >
> >
> >
> >
> > ------------------------------
> >
> > Message: 4
> > Date: Fri, 28 May 2021 02:53:53 +0300
> > From: Maxim Dounin
> > To: nginx at nginx.org
> > Subject: Re: How to do a large buffer size > 64k uWSGI requests with
> > Nginx proxy | uwsgi request is too big with nginx
> > Message-ID:
> > Content-Type: text/plain; charset=us-ascii
> >
> > Hello!
> >
> > On Thu, May 27, 2021 at 02:55:24PM -0400, Rai Mohammed wrote:
> >
> > > How to do a large buffer size > 64k uWSGI requests with Nginx proxy
> > >
> > > Deployment stack:
> > > Odoo ERP 12
> > > Python 3.7.10 and Werkzeug 0.16.1 as backend
> > > Nginx proxy: 1.20.0
> > > uWSGI: 2.0.19.1
> > > OS: FreeBSD 13.0-RELEASE
> > >
> > > Nginx throws an alert from uwsgi that the request is too big:
> > > Alert: uwsgi request is too big: 81492, client: 10.29.79.250,
> > > server: odoo12ce-erp, request: "GET /web/webclient/..........."
> > >
> > > As you can see, I increased the "uwsgi_buffer_size" in both
> > > uwsgi.ini and nginx.conf.
> >
> > The uwsgi protocol uses a 16-bit datasize field[1], and this limits
> > the maximum size of all headers in a request to uwsgi backends. The
> > error message from nginx suggests you are hitting this limit.
> > Unfortunately, using larger buffers won't help here.
> >
> > In most cases, such huge request headers indicate that there is a bug
> > somewhere. For example, nginx by default limits the total size of
> > request headers to 32k (see [2]). A similar 64k limit also exists in
> > FastCGI (though that protocol clearly defines how to provide
> > additional data if needed; it's just not implemented in nginx), and
> > the only case when it was questioned was due to a miscoded client
> > (see [3]).
> >
> > If such huge request headers are nevertheless intentional, the
> > simplest solution would probably be to switch to a different
> > protocol, such as HTTP.
> >
> > [1] https://uwsgi-docs.readthedocs.io/en/latest/Protocol.html
> > [2] http://nginx.org/r/large_client_header_buffers
> > [3] https://trac.nginx.org/nginx/ticket/239
> >
> > --
> > Maxim Dounin
> > http://mdounin.ru/
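> > (Side note: the 32k default mentioned in [2] can be raised, e.g. with
> >
> >     large_client_header_buffers 4 64k;
> >
> > in the http or server block, but that only changes what nginx accepts
> > from the client; the 16-bit datasize limit of the uwsgi protocol
> > still applies on the backend side.)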
> >
> > ------------------------------
> >
> > Message: 5
> > Date: Thu, 27 May 2021 21:59:13 -0400
> > From: "Rai Mohammed"
> > To: nginx at nginx.org
> > Subject: Re: How to do a large buffer size > 64k uWSGI requests with
> > Nginx proxy | uwsgi request is too big with nginx
> > Message-ID:
> > <06b5de642b1389c66bdbe8e193824603.NginxMailingListEnglish at forum.nginx.org>
> > Content-Type: text/plain; charset=UTF-8
> >
> > Hello,
> > Yes, I have tracked down the request that generates this big header,
> > and it's a GET URI pulling all the features installed and requested
> > by the user of the ERP.
> >
> > Before I integrated the uWSGI layer, the stack deployment with Nginx,
> > KTLS, and HTTP/2 worked perfectly, with no buffer-sizing problems.
> > The reason I added the uWSGI layer is to use the uwsgi-socket binary
> > protocol, and it works fine and very fast, decreasing the load time
> > of the principal web page.
> > So for now I have to switch to the http-socket protocol and configure
> > HTTP/2 in uWSGI.
> > I hope that in the future nginx will allow such huge header sizes.
> >
> > Thanks for your reply and clarifications.
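> > (A sketch of that switch, assuming uWSGI should speak plain HTTP on
> > the same address; the port and buffer sizes are taken from the
> > configs above, the rest is illustrative:)
> >
> >     # uwsgi.ini: replace the binary-protocol sockets
> >     http-socket = 127.0.0.1:8070
> >
> >     # nginx: talk HTTP to the backend instead of uwsgi_pass
> >     location / {
> >         proxy_pass http://127.0.0.1:8070;
> >         proxy_http_version 1.1;
> >         proxy_buffer_size 128k;
> >         proxy_buffers 16 128k;
> >     }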
> >
> > Posted at Nginx Forum:
> > https://forum.nginx.org/read.php?2,291675,291680#msg-291680
> >
> >
> >
> > ------------------------------
> >
> > Message: 6
> > Date: Fri, 28 May 2021 09:00:00 +0100
> > From: Francis Daly
> > To: nginx at nginx.org
> > Subject: Re: Help: Using Nginx Reverse Proxy bypass traffic in to a
> > application running in a container
> > Message-ID: <20210528080000.GC11167 at daoine.org>
> > Content-Type: text/plain; charset=us-ascii
> >
> > On Tue, May 25, 2021 at 09:47:47PM +0530, Amila Gunathilaka wrote:
> >
> > Hi there,
> >
> > > I'm sorry for taking time to reply to this, you were so keen about
> > > my problem. Thank you.
> >
> > No worries at all -- the mailing list is not an immediate-response
> > medium.
> >
> > > Actually my problem was when sending the *response* to the load
> > > balancer from nginx (not the request; it should be corrected to
> > > *response* in my previous email).
> > > My external load balancer is always doing a health check on my
> > > nginx port (80); below is the *response* message in
> > > /var/log/nginx/access.log for the health-check request coming from
> > > the external load balancer.
> >
> > As I understand it, the load balancer is making the request
> > "OPTIONS /" to nginx, and nginx is responding with a http 405, and
> > you don't want nginx to do that.
> >
> > What response do you want nginx to give to the request?
> >
> > Your config makes it look like nginx is told to proxy_pass the
> > OPTIONS request to your port-9091 server, so I presume that your
> > port-9091 server is responding 405 to the OPTIONS request and nginx
> > is passing the response from the 9091 upstream to the load-balancer
> > client.
> >
> > Your port-9091 logs or traffic analysis should show that that is the
> > case.
> >
> > If that is the case, you *could* fix it by telling your 9091 upstream
> > to respond how you want it to to the "OPTIONS /" request (using its
> > config); or you could configure nginx to intercept the request and
> > handle it itself, without proxy_pass'ing it.
> >
> > The first case would mean that the "health check" is actually testing
> > the full nginx-to-upstream chain; the second would have it only
> > testing that nginx is responding.
> >
> > If you decide that you want nginx to handle this request itself, and
> > to respond with a http 204, you could add something like
> >
> >     if ($request_method = "OPTIONS") { return 204; }
> >
> > inside the "location /" block.
> >
> > (Strictly: that would tell nginx to handle all "OPTIONS /anything"
> > requests, not just "OPTIONS /".)
> >
> > You would not need the error_page directives that you show.
> >
> > You could instead add a new "location = /" block, and do the OPTIONS
> > check there; but you would probably also have to duplicate the three
> > other lines from the "location /" block -- sometimes people prefer
> > "tidy-looking" configuration over "correctness and probable machine
> > efficiency". Pick which you like; if you do not measure a difference,
> > there is not a difference that you care about.
> >
> > That is, you want either one location:
> >
> > > server {
> > >     listen 80;
> > >     server_name 172.25.234.105;
> > >     location / {
> >
> >           if ($request_method = "OPTIONS") { return 204; }
> >
> > >         proxy_pass http://127.0.0.1:9091;
> > >         auth_basic "PROMETHEUS PUSHGATEWAY Login Area";
> > >         auth_basic_user_file /etc/nginx/.htpasswd;
> > >     }
> > > }
> >
> > or two locations:
> >
> >     location = / {
> >         if ($request_method = "OPTIONS") { return 204; }
> >         proxy_pass http://127.0.0.1:9091;
> >         auth_basic "PROMETHEUS PUSHGATEWAY Login Area";
> >         auth_basic_user_file /etc/nginx/.htpasswd;
> >     }
> >
> >     location / {
> >         proxy_pass http://127.0.0.1:9091;
> >         auth_basic "PROMETHEUS PUSHGATEWAY Login Area";
> >         auth_basic_user_file /etc/nginx/.htpasswd;
> >     }
> >
> > (and, if you use the two, you could potentially move the "auth_basic"
> > and "auth_basic_user_file" outside the "location", to be directly
> > within "server"; that does depend on what else is in your config
> > file.)
> >
> > If you want something else in the response to the OPTIONS request,
> > you can change the "return" response code, or "add_header" and the
> > like.
> >
> > Good luck with it,
> >
> > f
> > --
> > Francis Daly        francis at daoine.org
> >
> >
> > ------------------------------
> >
> > Subject: Digest Footer
> >
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
> >
> > ------------------------------
> >
> > End of nginx Digest, Vol 139, Issue 28
> > **************************************