From felipeapolanco at gmail.com Tue Nov 7 21:16:38 2023 From: felipeapolanco at gmail.com (Felipe Polanco) Date: Tue, 7 Nov 2023 17:16:38 -0400 Subject: Question regarding stale sockets and EWOULDBLOCK Message-ID: Hello, We have a situation with an NGINX forward proxy and I would like the community's input on how to handle it. We run long-lived TCP sockets in our business, and we authorize transaction requests on behalf of our customers. We use NGINX as the TLS termination proxy. We are experiencing issues when our customer's internal network goes down but the link stays up: we send transactions to them but nothing comes back (no TCP ACK), and we see TCP retransmissions at this point. The NGINX send buffer then starts filling up until it reaches its maximum size, but the socket is not dropped; we monitor this with netstat output. NGINX does nothing and logs no error when this happens. This is the EWOULDBLOCK socket condition; we handle it in our application, but since we introduced NGINX we are now timing out transactions. We cannot use proxy_timeout because these are long-lived sockets; some stay up for months waiting for authorization requests. Is there any option in NGINX that we can use to close the maxed-out socket? Thanks, -------------- next part -------------- An HTML attachment was scrubbed... URL:
From badouglas at gmail.com Sun Nov 12 05:15:21 2023 From: badouglas at gmail.com (bruce) Date: Sun, 12 Nov 2023 00:15:21 -0500 Subject: test Message-ID: test to make sure i'm good..
From robertodmaggi at gmail.com Tue Nov 14 13:51:05 2023 From: robertodmaggi at gmail.com (Roberto D. Maggi) Date: Tue, 14 Nov 2023 14:51:05 +0100 Subject: location ~* \.(...) access_log off; prevents access to files instead of logs Message-ID: <5dc70b1f-3e95-4042-8479-65e4d1f4bf83@gmail.com> Hi you all, I'm having a problem with these two stanzas, in writing down a virtual host and can't figure out what's wrong with them. They look correct but the first doesn't simply work and the second blocks --> here I'm trying to add this header only to cgi|shtml|phtml|php extensions location ~* \.(?:cgi|shtml|phtml|php)$ { add_header Cache-Control "public"; client_max_body_size 0; chunked_transfer_encoding on; } --> here I don't want to log accesses to woff|woff2|ico|pdf|flv|jpg|jpeg|png|gif|js|css|gz|swf|txt files location ~* \.(?:woff|woff2|ico|pdf|flv|jpg|jpeg|png|gif|js|css|gz|swf|txt)$ { access_log off; } Can anybody guess what's wrong with them? Thanks in advance. Rob
From mdounin at mdounin.ru Tue Nov 14 15:52:14 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 14 Nov 2023 18:52:14 +0300 Subject: location ~* \.(...) access_log off; prevents access to files instead of logs In-Reply-To: <5dc70b1f-3e95-4042-8479-65e4d1f4bf83@gmail.com> References: <5dc70b1f-3e95-4042-8479-65e4d1f4bf83@gmail.com> Message-ID: Hello! On Tue, Nov 14, 2023 at 02:51:05PM +0100, Roberto D. Maggi wrote: > Hi you all, > I'm having a problem with these two stanzas, in writing down a virtual > host and can't figure out what's wrong with them.
> They look correct but the first doesn't simply work and the second blocks > > --> here I'm trying to add this header only to cgi|shtml|phtml|php > extensions > > location ~* \.(?:cgi|shtml|phtml|php)$ { >       add_header Cache-Control "public"; >       client_max_body_size 0; >       chunked_transfer_encoding on; >       } > > --> here I don't want to log accesses to to > woff|woff2|ico|pdf|flv|jpg|jpeg|png|gif|js|css|gz|swf|txt files > location ~* > \.(?:woff|woff2|ico|pdf|flv|jpg|jpeg|png|gif|js|css|gz|swf|txt)$ { >       access_log off; >       } > > > Does anybody can guess what's wrong with them? > Thanks in advance. When processing a particular request, nginx selects a location and handles a request according to the configuration in this location, see http://nginx.org/r/location. As such, the first location, which tries to alter processing of php files, does not seem to be correct: in particular, it lacks any fastcgi_pass / proxy_pass directives, and hence such files will be simply returned to the client as static files. While it might be what you indeed tried to setup, the "doesn't simply work" suggests it isn't. You may want to re-consider your configuration to ensure that requests to php files are properly proxied to appropriate backend servers. The second location, which disables logging to some static files, looks correct, but it might have the same problem: as long as requests are handled in this location, some essential handling which was previously present might be missing, and this breaks things. For example, a common error is to write something like this: location / { root /path/to/site; } location ~ \.css$ { # no root here } Note that the root path is configured in "location /", but not in "location ~ \.css$", hence all css files will use some default root path (or the one inherited from previous configuration levels), which is likely incorrect. An obvious fix would be to configure root at the server level instead, so it will be used for both locations. Just in case, looking into error log usually makes such issues trivial to identify - nginx will complain if it cannot find a file requested, and will show full path it tried to use. Hope this helps. -- Maxim Dounin http://mdounin.ru/ From raman.meenakshisundaram at insigniafinancial.com.au Wed Nov 15 23:43:39 2023 From: raman.meenakshisundaram at insigniafinancial.com.au (Raman Meenakshisundaram) Date: Wed, 15 Nov 2023 23:43:39 +0000 Subject: nginx is redirecting to wrong server context Message-ID: Hi I am trying to download a docker image through nginx, and found that it is always redirecting to the first server configured in the nginx.conf file. I am doing a podman pull "podman pull --tls-verify=false mcr.itt.aws.orpd.com.au/devcontainers/python:dev-3.9-buster" but it is wrongly going to docker-alice.itt.aws.oprd.com.au We have setup route53 record in AWS already. Below is the nginx.conf file content: ---------------------------------------------------------------------------------------------------------------------------------------- For more information on configuration, see: # * Official English Documentation: http://nginx.org/en/docs/ # * Official Russian Documentation: http://nginx.org/ru/docs/ user nginx; #worker_processes auto; worker_processes 4; worker_rlimit_nofile 4096; error_log /var/log/nginx/error.log; pid /run/nginx.pid; # Load dynamic modules. See /usr/share/doc/nginx/README.dynamic. 
include /usr/share/nginx/modules/*.conf; events { worker_connections 4096; } http { proxy_send_timeout 120; proxy_read_timeout 300; proxy_connect_timeout 300; proxy_buffering off; proxy_request_buffering off; # allow large uploads of files client_max_body_size 1G; keepalive_timeout 5 5; tcp_nodelay on; map $upstream_http_docker_distribution_api_version $docker_distribution_api_version { '' 'registry/2.0'; } server { listen 443 ssl; listen 80; server_name docker-alice.itt.aws.oprd.com.au; ssl_certificate /etc/nginx/ssl/selfsigned_wildcard_san_cert.crt; ssl_certificate_key /etc/nginx/ssl/privatekey_selfsigned_wildcard_san.pem; # Docker /v2 and /v1 (for search) requests resolver 10.78.128.2:53 valid=300s ipv6=off; resolver_timeout 10s; location /v2 { proxy_set_header Host $host:$server_port; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto "https"; set $backend "nexus.itt.aws.oprd.com.au"; proxy_pass https://$backend/repository/proxy-to-nonprod-hosted$request_uri; #proxy_pass https://nexus.itt.aws.oprd.com.au/repository/proxy-to-nonprod-hosted/$request_uri; } location /v1 { proxy_set_header Host $host:$server_port; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto "https"; set $backend "nexus.itt.aws.orpd.com.au"; proxy_pass https://$backend/repository/proxy-to-nonprod-hosted$request_uri; #proxy_pass https://nexus.itt.aws.oprd.com.au/repository/proxy-to-nonprod-hosted/$request_uri; } location / { proxy_set_header Host $host:$server_port; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto "https"; set $backend "nexus.itt.aws.oprd.com.au"; proxy_pass https://$backend/; #proxy_pass https://nexus.itt.aws.oprd.com.au/; } } server { listen 443 ssl; listen 80; server_name mcr.itt.aws.oprd.com.au; ssl_certificate /etc/nginx/ssl/selfsigned_wildcard_san_cert.crt; ssl_certificate_key /etc/nginx/ssl/privatekey_selfsigned_wildcard_san.pem; # Docker /v2 and /v1 (for search) requests resolver 10.78.128.2:53 valid=300s ipv6=off; resolver_timeout 10s; location /v2 { proxy_set_header Host $host:$server_port; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto "https"; set $backend "nexus.itt.aws.oprd.com.au"; proxy_pass https://$backend/repository/mcr-proxy$request_uri; } location /v1 { proxy_set_header Host $host:$server_port; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto "https"; set $backend "nexus.itt.aws.orpd.com.au"; proxy_pass https://$backend/repository/mcr-proxy$request_uri; } location / { proxy_set_header Host $host:$server_port; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto "https"; set $backend "nexus.itt.aws.oprd.com.au"; proxy_pass https://$backend/; #proxy_pass https://nexus.itt.aws.oprd.com.au/; } } } ******************************************************************************************* We acknowledge the traditional custodians of the land on which we meet, work and live. We pay our respects to the ancestors and Elders, past and present. 
The information in this email and any attachments may contain confidential, privileged or copyright material belonging to us, related entities or third parties. If you are not the intended recipient you are prohibited from disclosing this information. If you have received this email in error, please contact the sender immediately by return email or phone and delete it. We apologise for any inconvenience caused. We use security software but do not guarantee this email is free from viruses. You assume responsibility for any consequences arising from the use of this email. This email may contain personal views of the sender not authorised by us. ******************************************************************************************* -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeremy at jeremy.cx Thu Nov 16 01:26:11 2023 From: jeremy at jeremy.cx (Jeremy Cocks) Date: Thu, 16 Nov 2023 01:26:11 +0000 Subject: nginx is redirecting to wrong server context In-Reply-To: References: Message-ID: Hello > and found that it is always redirecting to the first server configured in the nginx.conf file. This is expected behaviour when you have not defined a default_server or you are not sending the appropriate host header in your request (you are not confirming how things are set in the http client you are using). The default behaviour is defined here: https://nginx.org/en/docs/http/request_processing.html > In this configuration nginx tests only the request’s header field “Host” to determine which server the request should be routed to. If its value does not match any server name, or the request does not contain this header field at all, then nginx will route the request to the default server for this port. In the configuration above, the default server is the first one — which is nginx’s standard default behaviour. It can also be set explicitly which server should be default, with the default_server parameter in the listen directive. I am assuming you want the default to be: mcr.itt.aws.oprd.com.au thus change the listen parameters on its server block: server { listen 443 ssl default_server; listen 80 default_server; server_name mcr.itt.aws.oprd.com.au; … } Cheers J On Wed, 15 Nov 2023 at 23:44, Raman Meenakshisundaram via nginx < nginx at nginx.org> wrote: > Hi > > I am trying to download a docker image through nginx, and found that it is > always redirecting to the first server configured in the nginx.conf file. > > > > I am doing a podman pull "podman pull --tls-verify=false > mcr.itt.aws.orpd.com.au/devcontainers/python:dev-3.9-buster" but it is > wrongly going to docker-alice.itt.aws.oprd.com.au > > > > We have setup route53 record in AWS already. > > > > Below is the nginx.conf file content: > > > ---------------------------------------------------------------------------------------------------------------------------------------- > > > > For more information on configuration, see: > > # * Official English Documentation: http://nginx.org/en/docs/ > > # * Official Russian Documentation: http://nginx.org/ru/docs/ > > > > user nginx; > > #worker_processes auto; > > worker_processes 4; > > worker_rlimit_nofile 4096; > > error_log /var/log/nginx/error.log; > > pid /run/nginx.pid; > > > > # Load dynamic modules. See /usr/share/doc/nginx/README.dynamic. 
> > include /usr/share/nginx/modules/*.conf; > > > > events { > > worker_connections 4096; > > } > > > > http { > > > > proxy_send_timeout 120; > > proxy_read_timeout 300; > > proxy_connect_timeout 300; > > proxy_buffering off; > > proxy_request_buffering off; > > # allow large uploads of files > > client_max_body_size 1G; > > keepalive_timeout 5 5; > > tcp_nodelay on; > > > > map $upstream_http_docker_distribution_api_version > $docker_distribution_api_version { > > '' 'registry/2.0'; > > } > > > > server { > > listen 443 ssl; > > listen 80; > > server_name docker-alice.itt.aws.oprd.com.au; > > > > ssl_certificate /etc/nginx/ssl/selfsigned_wildcard_san_cert.crt; > > ssl_certificate_key > /etc/nginx/ssl/privatekey_selfsigned_wildcard_san.pem; > > > > # Docker /v2 and /v1 (for search) requests > > resolver 10.78.128.2:53 valid=300s ipv6=off; > > resolver_timeout 10s; > > > > location /v2 { > > proxy_set_header Host $host:$server_port; > > proxy_set_header X-Real-IP $remote_addr; > > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > proxy_set_header X-Forwarded-Proto "https"; > > set $backend "nexus.itt.aws.oprd.com.au"; > > proxy_pass > https://$backend/repository/proxy-to-nonprod-hosted$request_uri; > > #proxy_pass > https://nexus.itt.aws.oprd.com.au/repository/proxy-to-nonprod-hosted/$request_uri > ; > > } > > location /v1 { > > proxy_set_header Host $host:$server_port; > > proxy_set_header X-Real-IP $remote_addr; > > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > proxy_set_header X-Forwarded-Proto "https"; > > set $backend "nexus.itt.aws.orpd.com.au"; > > proxy_pass > https://$backend/repository/proxy-to-nonprod-hosted$request_uri; > > #proxy_pass > https://nexus.itt.aws.oprd.com.au/repository/proxy-to-nonprod-hosted/$request_uri > ; > > } > > location / { > > proxy_set_header Host $host:$server_port; > > proxy_set_header X-Real-IP $remote_addr; > > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > proxy_set_header X-Forwarded-Proto "https"; > > set $backend "nexus.itt.aws.oprd.com.au"; > > proxy_pass https://$backend/; > > #proxy_pass https://nexus.itt.aws.oprd.com.au/; > > } > > } > > server { > > listen 443 ssl; > > listen 80; > > server_name mcr.itt.aws.oprd.com.au; > > > > ssl_certificate /etc/nginx/ssl/selfsigned_wildcard_san_cert.crt; > > ssl_certificate_key > /etc/nginx/ssl/privatekey_selfsigned_wildcard_san.pem; > > > > # Docker /v2 and /v1 (for search) requests > > resolver 10.78.128.2:53 valid=300s ipv6=off; > > resolver_timeout 10s; > > > > location /v2 { > > proxy_set_header Host $host:$server_port; > > proxy_set_header X-Real-IP $remote_addr; > > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > proxy_set_header X-Forwarded-Proto "https"; > > set $backend "nexus.itt.aws.oprd.com.au"; > > proxy_pass https://$backend/repository/mcr-proxy$request_uri; > > } > > location /v1 { > > proxy_set_header Host $host:$server_port; > > proxy_set_header X-Real-IP $remote_addr; > > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > proxy_set_header X-Forwarded-Proto "https"; > > set $backend "nexus.itt.aws.orpd.com.au"; > > proxy_pass https://$backend/repository/mcr-proxy$request_uri; > > } > > location / { > > proxy_set_header Host $host:$server_port; > > proxy_set_header X-Real-IP $remote_addr; > > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > proxy_set_header X-Forwarded-Proto "https"; > > set $backend "nexus.itt.aws.oprd.com.au"; > > proxy_pass https://$backend/; > > #proxy_pass 
https://nexus.itt.aws.oprd.com.au/; > > } > > } > > } > > ********************************************************************* > We acknowledge the traditional custodians of the land on which we meet, > work > and live. We pay our respects to the ancestors and Elders, past and > present. > > The information in this email and any attachments may contain > confidential, privileged > or copyright material belonging to us, related entities or third parties. > If you are not > the intended recipient you are prohibited from disclosing this > information. If you > have received this email in error, please contact the sender immediately > by return > email or phone and delete it. We apologise for any inconvenience caused. > We use > security software but do not guarantee this email is free from viruses. > You assume > responsibility for any consequences arising from the use of this email. > This email > may contain personal views of the sender not authorised by us. > ********************************************************************* > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From public1020 at proton.me Fri Nov 17 03:57:23 2023 From: public1020 at proton.me (public1020) Date: Fri, 17 Nov 2023 03:57:23 +0000 Subject: control proxy_buffering with variables Message-ID: I'm trying to control buffering with variables, but nginx complains about it, nginx: [emerg] invalid value "$val" in "proxy_request_buffering" directive, it must be "on" or "off" in /etc/nginx/sites-enabled/test.conf:9 Is there any way to resolve this? Attached the configuration in question. server { listen 8888; set $val on; if ($request_uri ~* "enable") { set $val off; } proxy_request_buffering $val; proxy_buffering $val; location / { proxy_pass http://127.0.0.1:3333; }} -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Fri Nov 17 20:30:27 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 17 Nov 2023 23:30:27 +0300 Subject: control proxy_buffering with variables In-Reply-To: References: Message-ID: Hello! On Fri, Nov 17, 2023 at 03:57:23AM +0000, public1020 via nginx wrote: > I'm trying to control buffering with variables, but nginx complains about it, > > nginx: [emerg] invalid value "$val" in "proxy_request_buffering" directive, it must be "on" or "off" in /etc/nginx/sites-enabled/test.conf:9 > > Is there any way to resolve this? Attached the configuration in question. Much like most of the nginx configuration directives, "proxy_request_buffering" does not support variables. Note that if variables are supported by a particular directive, this is explicitly documented in the directive description at nginx.org. If you want to use different buffering options for different requests, consider using distinct locations instead. Something like location / { proxy_pass http://127.0.0.1:3333; } location ~* enable { proxy_pass http://127.0.0.1:3333; proxy_request_buffering off; proxy_buffering off; } would be close to the configuration you've tried to use, and mostly arbitrary conditions, including the exact equivalent to your configuration, can be implemented using internal redirections, such as with "rewrite". Note well that the proxy_buffering can also be controlled from the backend via the X-Accel-Buffering response header. 
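For reference, a rough sketch of the rewrite-based equivalent might look like this (untested; the "/unbuffered" prefix is arbitrary and only serves to reach a location with different buffering settings):

server {
    listen 8888;

    # re-run location matching for requests that should not be buffered
    if ($request_uri ~* "enable") {
        rewrite ^ /unbuffered$uri last;
    }

    location / {
        proxy_pass http://127.0.0.1:3333;
    }

    location /unbuffered/ {
        internal;                     # only reachable via the rewrite above
        proxy_request_buffering off;
        proxy_buffering off;
        # the matched "/unbuffered/" prefix is replaced by the URI part
        # of proxy_pass, so the backend still sees the original URI
        proxy_pass http://127.0.0.1:3333/;
    }
}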
-- Maxim Dounin http://mdounin.ru/ From l2dy at aosc.io Sat Nov 18 06:44:20 2023 From: l2dy at aosc.io (Zero King) Date: Sat, 18 Nov 2023 14:44:20 +0800 Subject: Limiting number of client TLS connections Message-ID: <489b61d6-d20e-429f-be71-cdc704d983a9@aosc.io> Hi all, I want Nginx to limit the rate of new TLS connections and the total (or per-worker) number of all client-facing connections, so that under a sudden surge of requests, existing connections can get enough share of CPU to be served properly, while excessive connections are rejected and retried against other servers in the cluster. I am running Nginx on a managed Kubernetes cluster, so tuning kernel parameters or configuring layer 4 firewall is not an option. To serve existing connections well, worker_connections can not be used, because it also affects connections with proxied servers. Is there a way to implement these measures in Nginx configuration? From markbsdmail2023 at gmail.com Sat Nov 18 10:54:21 2023 From: markbsdmail2023 at gmail.com (Mark) Date: Sat, 18 Nov 2023 13:54:21 +0300 Subject: Nginx as reverse proxy - proxy_ssl_x questions Message-ID: Hello there. Having a proxy directive like; location / { proxy_pass http://10.10.10.4:4020; ... I wonder when using proxy_pass http://... (not httpS), are these directives effective, under the proxy_pass? proxy_ssl_name $host; proxy_ssl_server_name on; proxy_ssl_session_reuse off; Or they would work ONLY if proxy_pass is pointed to an "https://"? Best wishes, Regards. Mark. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sun Nov 19 00:05:28 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 19 Nov 2023 03:05:28 +0300 Subject: Nginx as reverse proxy - proxy_ssl_x questions In-Reply-To: References: Message-ID: Hello! On Sat, Nov 18, 2023 at 01:54:21PM +0300, Mark wrote: > Hello there. > > Having a proxy directive like; > > location / { > proxy_pass http://10.10.10.4:4020; > ... > > I wonder when using proxy_pass http://... (not httpS), > are these directives effective, under the proxy_pass? > > proxy_ssl_name $host; > proxy_ssl_server_name on; > proxy_ssl_session_reuse off; > > Or they would work ONLY if proxy_pass is pointed to an "https://"? The "proxy_ssl_*" directives define configuration for SSL proxying. That is, corresponding values are only used when proxy_pass is used with the "https" scheme. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Sun Nov 19 00:11:05 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 19 Nov 2023 03:11:05 +0300 Subject: Limiting number of client TLS connections In-Reply-To: <489b61d6-d20e-429f-be71-cdc704d983a9@aosc.io> References: <489b61d6-d20e-429f-be71-cdc704d983a9@aosc.io> Message-ID: Hello! On Sat, Nov 18, 2023 at 02:44:20PM +0800, Zero King wrote: > I want Nginx to limit the rate of new TLS connections and the total (or > per-worker) number of all client-facing connections, so that under a > sudden surge of requests, existing connections can get enough share of > CPU to be served properly, while excessive connections are rejected and > retried against other servers in the cluster. > > I am running Nginx on a managed Kubernetes cluster, so tuning kernel > parameters or configuring layer 4 firewall is not an option. > > To serve existing connections well, worker_connections can not be used, > because it also affects connections with proxied servers. > > Is there a way to implement these measures in Nginx configuration? 
No, nginx does not provide a way to limit rate of new connections and/or total number of established connections. Instead, firewall is expected to be used for such tasks. -- Maxim Dounin http://mdounin.ru/ From markbsdmail2023 at gmail.com Sun Nov 19 09:41:11 2023 From: markbsdmail2023 at gmail.com (Mark) Date: Sun, 19 Nov 2023 12:41:11 +0300 Subject: Nginx as reverse proxy - proxy_ssl_x questions In-Reply-To: References: Message-ID: Hello Mr. Maxim, thank you very much for your reply. Things are much clearer now, thanks! One, last question; I have implemented nginx as a reverse proxy with TLS termination in my FreeBSD host machine, and another nginx instance running in my jail, in; 10.10.10.2. So, the host machine does the reverse proxying and SSL. Before I open my website to public and production (a Wordpress website), could you please kindly have a look at my reverse proxy configuration here; http://paste.nginx.org/b8 So that you might wish to add some suggestions, or perhaps I still have a misconfigured/unneeded directive there? Thanks once again, Regards. Mark. Maxim Dounin , 19 Kas 2023 Paz, 03:05 tarihinde şunu yazdı: > Hello! > > On Sat, Nov 18, 2023 at 01:54:21PM +0300, Mark wrote: > > > Hello there. > > > > Having a proxy directive like; > > > > location / { > > proxy_pass http://10.10.10.4:4020; > > ... > > > > I wonder when using proxy_pass http://... (not httpS), > > are these directives effective, under the proxy_pass? > > > > proxy_ssl_name $host; > > proxy_ssl_server_name on; > > proxy_ssl_session_reuse off; > > > > Or they would work ONLY if proxy_pass is pointed to an "https://"? > > The "proxy_ssl_*" directives define configuration for SSL > proxying. That is, corresponding values are only used when > proxy_pass is used with the "https" scheme. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Sun Nov 19 21:02:31 2023 From: r at roze.lv (Reinis Rozitis) Date: Sun, 19 Nov 2023 23:02:31 +0200 Subject: Limiting number of client TLS connections In-Reply-To: <489b61d6-d20e-429f-be71-cdc704d983a9@aosc.io> References: <489b61d6-d20e-429f-be71-cdc704d983a9@aosc.io> Message-ID: <006c01da1b2b$b673ff90$235bfeb0$@roze.lv> > sudden surge of requests, existing connections can get enough share of CPU to be served properly, while excessive connections are rejected While you can't limit the connections (before the TLS handshake) there is a module to limit the requests per client/ip https://nginx.org/en/docs/http/ngx_http_limit_req_module.html (and with limit_req_status 444; you can effectively close the connection without returning any response). rr From mdounin at mdounin.ru Mon Nov 20 01:51:19 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 20 Nov 2023 04:51:19 +0300 Subject: Nginx as reverse proxy - proxy_ssl_x questions In-Reply-To: References: Message-ID: Hello! On Sun, Nov 19, 2023 at 12:41:11PM +0300, Mark wrote: > Hello Mr. Maxim, thank you very much for your reply. > > Things are much clearer now, thanks! > > One, last question; > > I have implemented nginx as a reverse proxy with TLS termination in my > FreeBSD host machine, and another nginx instance running in my jail, in; > 10.10.10.2. > > So, the host machine does the reverse proxying and SSL. 
> > Before I open my website to public and production (a Wordpress website), > could you please kindly have a look at my reverse proxy configuration here; > > http://paste.nginx.org/b8 > > So that you might wish to add some suggestions, or perhaps I still have a > misconfigured/unneeded directive there? Here are some comments: > proxy_cache_bypass $http_upgrade; You don't need proxy_cache_bypass if you aren't using cache. > proxy_buffering off; I don't really recommend switching off buffering unless you have reasons to. And if the reason is to avoid disk buffering, consider "proxy_max_temp_file_size 0;" instead, see http://nginx.org/r/proxy_max_temp_file_size for details. > proxy_set_header Referer $scheme://$host; This looks simply wrong. > proxy_set_header X-Scheme https; > proxy_set_header X-Forwarded-Proto https; > proxy_set_header X-Scheme https; > proxy_set_header X-Forwarded-Ssl on; This looks a bit too many of custom headers to let backend know that https is being used. > proxy_set_header Upgrade $http_upgrade; > proxy_set_header Connection "upgrade"; This shouldn't be used unless you intentionally configuring WebSocket proxying. > proxy_set_header Early-Data $ssl_early_data; This is certainly not needed unless you are using TLSv1.3 Early Data (http://nginx.org/r/ssl_early_data), and you aren't. Hope this helps. -- Maxim Dounin http://mdounin.ru/ From jordanc.carter at outlook.com Mon Nov 20 02:33:08 2023 From: jordanc.carter at outlook.com (J Carter) Date: Mon, 20 Nov 2023 02:33:08 +0000 Subject: Limiting number of client TLS connections In-Reply-To: <489b61d6-d20e-429f-be71-cdc704d983a9@aosc.io> References: <489b61d6-d20e-429f-be71-cdc704d983a9@aosc.io> Message-ID: Hello, A self contained solution would be to double proxy, first through nginx stream server and then locally back to nginx http server (with proxy_pass via unix socket, or to localhost on a different port). You can implement your own custom rate limiting logic in the stream server with NJS (js_access) and use the new js_shared_dict_zone (which is shared between workers) for persistently storing rate calculations. You'd have additional overhead from the stream tcp proxy and the njs, but it shouldn't be too great (at least compared to overhead of TLS handshakes). Regards, Jordan Carter. ________________________________________ From: nginx on behalf of Zero King Sent: Saturday, November 18, 2023 6:44 AM To: nginx at nginx.org Subject: Limiting number of client TLS connections Hi all, I want Nginx to limit the rate of new TLS connections and the total (or per-worker) number of all client-facing connections, so that under a sudden surge of requests, existing connections can get enough share of CPU to be served properly, while excessive connections are rejected and retried against other servers in the cluster. I am running Nginx on a managed Kubernetes cluster, so tuning kernel parameters or configuring layer 4 firewall is not an option. To serve existing connections well, worker_connections can not be used, because it also affects connections with proxied servers. Is there a way to implement these measures in Nginx configuration? 
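For what it's worth, a very rough sketch of the stream-side piece could look something like this (untested; the zone name, the 100-connections-per-second limit, and the unix socket path are only placeholders, and the http{} side would listen on that socket with "ssl" as usual):

stream {
    js_import limiter from rate_limit.js;

    # per-second counters; entries expire so the zone does not fill up
    js_shared_dict_zone zone=rate:1m type=number timeout=10s;

    server {
        listen 443;
        js_access limiter.access;

        # hand the accepted connection to the local http{} server,
        # which terminates TLS as before
        proxy_pass unix:/var/run/nginx-https.sock;
    }
}

and rate_limit.js along these lines:

// fixed-window limit on new connections per second
var LIMIT = 100;

function access(s) {
    var now = Math.floor(Date.now() / 1000);
    var count = ngx.shared.rate.incr('conn:' + now, 1, 0);

    if (count > LIMIT) {
        s.deny();   // reject the connection at the access phase
        return;
    }
    s.done();       // accept the connection
}

export default { access };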
_______________________________________________ nginx mailing list nginx at nginx.org https://mailman.nginx.org/mailman/listinfo/nginx From l2dy at aosc.io Mon Nov 20 15:29:39 2023 From: l2dy at aosc.io (Zero King) Date: Mon, 20 Nov 2023 23:29:39 +0800 Subject: Limiting number of client TLS connections In-Reply-To: References: <489b61d6-d20e-429f-be71-cdc704d983a9@aosc.io> Message-ID: <09b447dd-35ab-4019-b89c-fa6a8c6e0543@aosc.io> Hi Maxim, Thanks for your reply! In our case, layer-4 firewall is difficult to introduce in the request path. Would you consider rate limiting in Nginx a valid feature request? On 19/11/23 08:11, Maxim Dounin wrote: > Hello! > > On Sat, Nov 18, 2023 at 02:44:20PM +0800, Zero King wrote: > >> I want Nginx to limit the rate of new TLS connections and the total (or >> per-worker) number of all client-facing connections, so that under a >> sudden surge of requests, existing connections can get enough share of >> CPU to be served properly, while excessive connections are rejected and >> retried against other servers in the cluster. >> >> I am running Nginx on a managed Kubernetes cluster, so tuning kernel >> parameters or configuring layer 4 firewall is not an option. >> >> To serve existing connections well, worker_connections can not be used, >> because it also affects connections with proxied servers. >> >> Is there a way to implement these measures in Nginx configuration? > No, nginx does not provide a way to limit rate of new connections > and/or total number of established connections. Instead, firewall is > expected to be used for such tasks. > From mdounin at mdounin.ru Tue Nov 21 20:16:50 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 21 Nov 2023 23:16:50 +0300 Subject: Limiting number of client TLS connections In-Reply-To: <09b447dd-35ab-4019-b89c-fa6a8c6e0543@aosc.io> References: <489b61d6-d20e-429f-be71-cdc704d983a9@aosc.io> <09b447dd-35ab-4019-b89c-fa6a8c6e0543@aosc.io> Message-ID: Hello! On Mon, Nov 20, 2023 at 11:29:39PM +0800, Zero King wrote: > In our case, layer-4 firewall is difficult to introduce in the request > path. Would you consider rate limiting in Nginx a valid feature request? Firewall is expected to be much more effective solution compared to nginx (which has to work with already established connections at the application level). It might be a better idea to actually introduce a firewall if you need such limits (or, rather, make it possible to configure the one most likely already present). -- Maxim Dounin http://mdounin.ru/ From l2dy at aosc.io Sat Nov 25 08:03:37 2023 From: l2dy at aosc.io (Zero King) Date: Sat, 25 Nov 2023 16:03:37 +0800 Subject: Limiting number of client TLS connections In-Reply-To: References: <489b61d6-d20e-429f-be71-cdc704d983a9@aosc.io> Message-ID: Hi Jordan, Thanks for your suggestion. I will give it a try and also try to push our K8s team to implement a firewall if possible. On 20/11/23 10:33, J Carter wrote: > Hello, > > A self contained solution would be to double proxy, first through nginx stream server and then locally back to nginx http server (with proxy_pass via unix socket, or to localhost on a different port). > > You can implement your own custom rate limiting logic in the stream server with NJS (js_access) and use the new js_shared_dict_zone (which is shared between workers) for persistently storing rate calculations. > > You'd have additional overhead from the stream tcp proxy and the njs, but it shouldn't be too great (at least compared to overhead of TLS handshakes). 
> > Regards, > Jordan Carter. > > ________________________________________ > From: nginx on behalf of Zero King > Sent: Saturday, November 18, 2023 6:44 AM > To: nginx at nginx.org > Subject: Limiting number of client TLS connections > > Hi all, > > I want Nginx to limit the rate of new TLS connections and the total (or > per-worker) number of all client-facing connections, so that under a > sudden surge of requests, existing connections can get enough share of > CPU to be served properly, while excessive connections are rejected and > retried against other servers in the cluster. > > I am running Nginx on a managed Kubernetes cluster, so tuning kernel > parameters or configuring layer 4 firewall is not an option. > > To serve existing connections well, worker_connections can not be used, > because it also affects connections with proxied servers. > > Is there a way to implement these measures in Nginx configuration? > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx From jordanc.carter at outlook.com Sat Nov 25 22:55:10 2023 From: jordanc.carter at outlook.com (J Carter) Date: Sat, 25 Nov 2023 22:55:10 +0000 Subject: Limiting number of client TLS connections In-Reply-To: References: <489b61d6-d20e-429f-be71-cdc704d983a9@aosc.io> Message-ID: No problem at all :) One other suggestion if you do go down the double proxy + njs route. Keep an eye on the nginx-devel mailing list (or nginx release notes) for this patch series https://mailman.nginx.org/pipermail/nginx-devel/2023-November/QUTQYBNAHLMQMGTKQK57IXDXD23VVIQO.html The last patch in the series will make proxying from stream to http significantly more efficient, if merged. On Sat, 25 Nov 2023 16:03:37 +0800 Zero King wrote: > Hi Jordan, > > Thanks for your suggestion. I will give it a try and also try to push > our K8s team to implement a firewall if possible. > > On 20/11/23 10:33, J Carter wrote: > > Hello, > > > > A self contained solution would be to double proxy, first through nginx stream server > > and then locally back to nginx http server (with proxy_pass via unix socket, or to > > localhost on a different port). > > > > You can implement your own custom rate limiting logic in the stream server with NJS > > (js_access) and use the new js_shared_dict_zone (which is shared between workers) for > > persistently storing rate calculations. > > > > You'd have additional overhead from the stream tcp proxy and the njs, but it > > shouldn't be too great (at least compared to overhead of TLS handshakes). > > > > Regards, > > Jordan Carter. > > > > ________________________________________ > > From: nginx on behalf of Zero King > > Sent: Saturday, November 18, 2023 6:44 AM > > To: nginx at nginx.org > > Subject: Limiting number of client TLS connections > > > > Hi all, > > > > I want Nginx to limit the rate of new TLS connections and the total (or > > per-worker) number of all client-facing connections, so that under a > > sudden surge of requests, existing connections can get enough share of > > CPU to be served properly, while excessive connections are rejected and > > retried against other servers in the cluster. > > > > I am running Nginx on a managed Kubernetes cluster, so tuning kernel > > parameters or configuring layer 4 firewall is not an option. 
> > > > To serve existing connections well, worker_connections can not be used, > > because it also affects connections with proxied servers. > > > > Is there a way to implement these measures in Nginx configuration? > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > https://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > https://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx From kaushalshriyan at gmail.com Mon Nov 27 19:09:47 2023 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Tue, 28 Nov 2023 00:39:47 +0530 Subject: Disable http_dav_module in Nginx Web server (version nginx/1.24.0) Message-ID: Hi, I am running nginx version: nginx/1.24.0 on Red Hat Enterprise Linux release 8.8 (Ootpa). Is there a way to disable http_dav_module in Nginx Web server? # nginx -v nginx version: nginx/1.24.0 # cat /etc/redhat-release Red Hat Enterprise Linux release 8.8 (Ootpa). # # nginx -V 2>&1 | grep http_dav_module configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -pie' Please guide me. Thanks in Advance. Best Regards, Kaushal -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Nov 27 21:45:01 2023 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 28 Nov 2023 00:45:01 +0300 Subject: Disable http_dav_module in Nginx Web server (version nginx/1.24.0) In-Reply-To: References: Message-ID: Hello! On Tue, Nov 28, 2023 at 12:39:47AM +0530, Kaushal Shriyan wrote: > I am running nginx version: nginx/1.24.0 on Red Hat Enterprise Linux > release 8.8 (Ootpa). Is there a way to disable http_dav_module in Nginx Web > server? The DAV module is disabled by default, unless you've explicitly enabled it in nginx configuration with the dav_methods directive (http://nginx.org/r/dav_methods). 
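In other words, even with the module compiled in, WebDAV requests are only processed where a configuration along these lines has been added explicitly (the location and method list here are just an illustration):

location /upload/ {
    # WebDAV is active only where dav_methods is set; the built-in
    # default is "dav_methods off;", so all other locations are unaffected
    dav_methods PUT DELETE MKCOL COPY MOVE;
}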
If you additionally want nginx without the DAV module compiled in, recompile nginx without the "--with-http_dav_module" configure option. -- Maxim Dounin http://mdounin.ru/ From osa at freebsd.org.ru Mon Nov 27 21:47:09 2023 From: osa at freebsd.org.ru (Sergey A. Osokin) Date: Tue, 28 Nov 2023 00:47:09 +0300 Subject: Disable http_dav_module in Nginx Web server (version nginx/1.24.0) In-Reply-To: References: Message-ID: Hi Kaushal, hope you're doing well. Would you mind to provide your fillings and concerns, if any, on the ngx_http_dav module. It's definitely possible to use the build scripts, available in the pkg-oss repo, [1], update configure options and rebuild the package for your needs. References ---------- 1. https://hg.nginx.org/pkg-oss/ Thank you. -- Sergey A. Osokin On Tue, Nov 28, 2023 at 12:39:47AM +0530, Kaushal Shriyan wrote: > Hi, > > I am running nginx version: nginx/1.24.0 on Red Hat Enterprise Linux > release 8.8 (Ootpa). Is there a way to disable http_dav_module in Nginx Web > server? > > # nginx -v > nginx version: nginx/1.24.0 > # cat /etc/redhat-release > Red Hat Enterprise Linux release 8.8 (Ootpa). > # > # nginx -V 2>&1 | grep http_dav_module > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid > --lock-path=/var/run/nginx.lock > --http-client-body-temp-path=/var/cache/nginx/client_temp > --http-proxy-temp-path=/var/cache/nginx/proxy_temp > --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp > --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp > --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx > --with-compat --with-file-aio --with-threads --with-http_addition_module > --with-http_auth_request_module --with-http_dav_module > --with-http_flv_module --with-http_gunzip_module > --with-http_gzip_static_module --with-http_mp4_module > --with-http_random_index_module --with-http_realip_module > --with-http_secure_link_module --with-http_slice_module > --with-http_ssl_module --with-http_stub_status_module > --with-http_sub_module --with-http_v2_module --with-mail > --with-mail_ssl_module --with-stream --with-stream_realip_module > --with-stream_ssl_module --with-stream_ssl_preread_module > --with-cc-opt='-O2 -g -pipe -Wall -Werror=format-security > -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions > -fstack-protector-strong -grecord-gcc-switches > -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 > -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic > -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection > -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -pie' > > Please guide me. Thanks in Advance. > > Best Regards, > > Kaushal > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx From kaushalshriyan at gmail.com Tue Nov 28 16:49:41 2023 From: kaushalshriyan at gmail.com (Kaushal Shriyan) Date: Tue, 28 Nov 2023 22:19:41 +0530 Subject: Disable http_dav_module in Nginx Web server (version nginx/1.24.0) In-Reply-To: References: Message-ID: Hi On Tue, Nov 28, 2023 at 3:17 AM Sergey A. Osokin wrote: > Hi Kaushal, > > hope you're doing well. > > Would you mind to provide your fillings and concerns, if any, on the > ngx_http_dav module. 
> > It's definitely possible to use the build scripts, available in the > pkg-oss repo, [1], update configure options and rebuild the package > for your needs. > > References > ---------- > 1. https://hg.nginx.org/pkg-oss/ > > Thank you. > > -- > Sergey A. Osokin > > On Tue, Nov 28, 2023 at 12:39:47AM +0530, Kaushal Shriyan wrote: > > Hi, > > > > I am running nginx version: nginx/1.24.0 on Red Hat Enterprise Linux > > release 8.8 (Ootpa). Is there a way to disable http_dav_module in Nginx > Web > > server? > > > > # nginx -v > > nginx version: nginx/1.24.0 > > # cat /etc/redhat-release > > Red Hat Enterprise Linux release 8.8 (Ootpa). > > # > > # nginx -V 2>&1 | grep http_dav_module > > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > > --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf > > --error-log-path=/var/log/nginx/error.log > > --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid > > --lock-path=/var/run/nginx.lock > > --http-client-body-temp-path=/var/cache/nginx/client_temp > > --http-proxy-temp-path=/var/cache/nginx/proxy_temp > > --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp > > --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp > > --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx > --group=nginx > > --with-compat --with-file-aio --with-threads --with-http_addition_module > > --with-http_auth_request_module --with-http_dav_module > > --with-http_flv_module --with-http_gunzip_module > > --with-http_gzip_static_module --with-http_mp4_module > > --with-http_random_index_module --with-http_realip_module > > --with-http_secure_link_module --with-http_slice_module > > --with-http_ssl_module --with-http_stub_status_module > > --with-http_sub_module --with-http_v2_module --with-mail > > --with-mail_ssl_module --with-stream --with-stream_realip_module > > --with-stream_ssl_module --with-stream_ssl_preread_module > > --with-cc-opt='-O2 -g -pipe -Wall -Werror=format-security > > -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions > > -fstack-protector-strong -grecord-gcc-switches > > -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 > > -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic > > -fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection > > -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -pie' > > > > Please guide me. Thanks in Advance. > > > > Best Regards, > > > > Kaushal > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > https://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx Hi Sergey, I am working with an enterprise customer in financial domain. Their security team have suggested is the below recommendation. ############################################################################################################ 2.1.2 Ensure HTTP WebDAV module is not installed (Automated) Profile Applicability: • Level 2 - Webserver • Level 2 - Proxy • Level 2 – Loadbalancer Description: The http_dav_module enables HTTP Extensions for Web Distributed Authoring and Versioning (WebDAV) as defined by RFC 4918. This enables file-based operations on your web server, such as the ability to create, delete, change and move files on your server. Most modern architectures have replaced this functionality with cloud-based object storage, in which case the module should not be installed. 
Rationale: WebDAV functionality opens up an unnecessary path for exploiting your web server. Through misconfigurations of WebDAV operations, an attacker may be able to access and manipulate files on the server. Audit: Run the following command to ensure the http_dav_module is not installed: nginx -V 2>&1 | grep http_dav_module Ensure the output of the command is empty. Remediation: To remove the http_dav_module, recompile nginx from source without the --with-http_dav_module flag. Default Value: The HTTP WebDAV module is not installed by default when installing from source. It does come by default when installed using dnf. ############################################################################################################ Please guide me further. Thanks in advance. Best Regards, Kaushal -------------- next part -------------- An HTML attachment was scrubbed... URL:
From teward at thomas-ward.net Wed Nov 29 00:42:59 2023 From: teward at thomas-ward.net (Thomas Ward) Date: Tue, 28 Nov 2023 19:42:59 -0500 Subject: Disable http_dav_module in Nginx Web server (version nginx/1.24.0) In-Reply-To: References: Message-ID: <86f22672-7eb8-4f3e-8673-c108e97596be@thomas-ward.net> Kaushal, The answer from Sergey is actually accurate. You'd have to modify the build scripts to exclude the webdav module and then recompile the NGINX packaging for your environment. This is not *hard* but requires more knowledge than just NGINX to provide a solution that fits your organization. The pkg-oss repo that Sergey provided a link to provides the baseline components necessary to build the open source packages that can be used by your system. You would have to create your own RHEL packages based off the pkg-oss repository and then build those packages and install them on your corresponding infrastructure. That will, however, disable the ability for you to get updates via the RHEL repositories. Where did your client get the 'recommendation' from? Generally speaking, most security teams aren't going to want to manually build software independently because that can cause issues with security updates. Additionally, unless WebDAV is enabled in your environment (read: *enabled*, not whether installed or not), it shouldn't be doing anything. You can also just disable webdav by giving zero access with a single line which then blocks all WebDAV routes. Also, additionally, refer to this: http://nginx.org/en/docs/http/ngx_http_dav_module.html Specifically, the webdav system / module does NOT intercept methods and do WebDAV stuff unless the configuration is set to. The defaults for the webdav module specify this for the dav methods (which in turn tells the module when to actually do something or not with the HTTP method received and in turn processing that as WebDAV): dav_methods off; When dav_methods is off, which is the default unless you manually set it otherwise, all methods are denied to the WebDAV module, per the documentation of that directive: "Allows the specified HTTP and WebDAV methods. The parameter |off| denies all methods processed by this module." You may want to inform your clients' security team the following: "In order to disable this module, we would have to manually compile the software for your environment, which means that you will no longer receive security updates, etc. from the RHEL team or repositories. Additionally, documentation on this module states that the default setup for this module is to be **disabled** regardless of whether this is compiled into the binaries or not.
If you really want this module disabled, we will have to manually compile NGINX for all your machines, and it will then be up to you to apply patches from NGINX for security vulnerabilities and issues yourselves." This achieves the following: (1) Indicates to your clients that you've researched this issue, (2) Indicated to your clients that, as you've done your research, you've identified that in order to change the compiled-in modules you would be required to manually do this per machine and break security patches from RHEL, and (3) During your research, it was uncovered that the presence of this module does not by default enable WebDAV functionality, thereby eliminating the security risk unless one of your administrators configures the WebDAV module for a given site. It also lets their team determine whether they really want to take on the "manually recompile from source every patch" burden, and also that their security concerns are mitigated because the webdav methods are disabled by default. Thomas --- Thomas Ward IT Security Professional NGINX Package Maintainer, Debian NGINX Package Watcher/Maintainer/Helper, Ubuntu On 11/28/23 11:49, Kaushal Shriyan wrote: > Hi > > On Tue, Nov 28, 2023 at 3:17 AM Sergey A. Osokin > wrote: > > Hi Kaushal, > > hope you're doing well. > > Would you mind to provide your fillings and concerns, if any, on the > ngx_http_dav module. > > It's definitely possible to use the build scripts, available in the > pkg-oss repo, [1], update configure options and rebuild the package > for your needs. > > References > ---------- > 1. https://hg.nginx.org/pkg-oss/ > > Thank you. > > -- > Sergey A. Osokin > > On Tue, Nov 28, 2023 at 12:39:47AM +0530, Kaushal Shriyan wrote: > > Hi, > > > > I am running nginx version: nginx/1.24.0 on Red Hat Enterprise Linux > > release 8.8 (Ootpa). Is there a way to disable http_dav_module > in Nginx Web > > server? > > > > # nginx -v > > nginx version: nginx/1.24.0 > > # cat /etc/redhat-release > > Red Hat Enterprise Linux release 8.8 (Ootpa). 
> > # > > # nginx -V 2>&1 | grep http_dav_module > > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > > --modules-path=/usr/lib64/nginx/modules > --conf-path=/etc/nginx/nginx.conf > > --error-log-path=/var/log/nginx/error.log > > --http-log-path=/var/log/nginx/access.log > --pid-path=/var/run/nginx.pid > > --lock-path=/var/run/nginx.lock > > --http-client-body-temp-path=/var/cache/nginx/client_temp > > --http-proxy-temp-path=/var/cache/nginx/proxy_temp > > --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp > > --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp > > --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx > --group=nginx > > --with-compat --with-file-aio --with-threads > --with-http_addition_module > > --with-http_auth_request_module --with-http_dav_module > > --with-http_flv_module --with-http_gunzip_module > > --with-http_gzip_static_module --with-http_mp4_module > > --with-http_random_index_module --with-http_realip_module > > --with-http_secure_link_module --with-http_slice_module > > --with-http_ssl_module --with-http_stub_status_module > > --with-http_sub_module --with-http_v2_module --with-mail > > --with-mail_ssl_module --with-stream --with-stream_realip_module > > --with-stream_ssl_module --with-stream_ssl_preread_module > > --with-cc-opt='-O2 -g -pipe -Wall -Werror=format-security > > -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS -fexceptions > > -fstack-protector-strong -grecord-gcc-switches > > -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 > > -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic > > -fasynchronous-unwind-tables -fstack-clash-protection > -fcf-protection > > -fPIC' --with-ld-opt='-Wl,-z,relro -Wl,-z,now -pie' > > > > Please guide me. Thanks in Advance. > > > > Best Regards, > > > > Kaushal > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > https://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > > > Hi Sergey, > > I am working with an enterprise customer in financial domain. Their > security team have suggested is the below recommendation. > > ############################################################################################################ > 2.1.2 Ensure HTTP WebDAV module is not installed (Automated) > Profile Applicability: > • Level 2 - Webserver > • Level 2 - Proxy > • Level 2 – Loadbalancer > Description: > The http_dav_module enables HTTP Extensions for Web Distributed > Authoring and Versioning > (WebDAV) as defined by RFC 4918. This enables file-based operations on > your web server, such > as the ability to create, delete, change and move files on your > server. Most modern > architectures have replaced this functionality with cloud-based object > storage, in which case > the module should not be installed. > Rationale: > WebDAV functionality opens up an unnecessary path for exploiting your > web server. Through > misconfigurations of WebDAV operations, an attacker may be able to > access and manipulate > files on the server. > Audit: > Run the following command to ensure the http_dav_module is not installed: > nginx -V 2>&1 | grep http_dav_module > > Ensure the output of the command is empty. > Remediation: > To remove the http_dav_module, recompile nginx from source without the -- > withhttp_dav_module flag. 
> Default Value: > The HTTP WebDAV module is not installed by default when installing > from source. It does come > by default when installed using dnf. > ############################################################################################################ > Please guide me further.  Thanks in advance. > > Best Regards, > > Kaushal > > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: