<div dir="ltr">Dear Francis,<br><br>This is a follow-up to my previous email; I just want to add one thing to it. <div><br>My concern is why nginx still gives 401 responses <b>unless</b> my nginx.conf has a basic-authentication username and password file at /etc/nginx/.htpasswd.<br><br>It looks like my external client's POST requests are still not being authenticated. Any thoughts?<br><br>Below is a screenshot of the /var/log/nginx/access.log output.<br><img src="cid:ii_kp9u6p7b0" alt="image.png" width="472" height="263"><br><br>Thank you,<br><br>Amila<br><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, May 29, 2021 at 7:12 PM <<a href="mailto:nginx-request@nginx.org">nginx-request@nginx.org</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Send nginx mailing list submissions to<br>
<a href="mailto:nginx@nginx.org" target="_blank">nginx@nginx.org</a><br>
<br>
To subscribe or unsubscribe via the World Wide Web, visit<br>
<a href="http://mailman.nginx.org/mailman/listinfo/nginx" rel="noreferrer" target="_blank">http://mailman.nginx.org/mailman/listinfo/nginx</a><br>
or, via email, send a message with subject or body 'help' to<br>
<a href="mailto:nginx-request@nginx.org" target="_blank">nginx-request@nginx.org</a><br>
<br>
You can reach the person managing the list at<br>
<a href="mailto:nginx-owner@nginx.org" target="_blank">nginx-owner@nginx.org</a><br>
<br>
When replying, please edit your Subject line so it is more specific<br>
than "Re: Contents of nginx digest..."<br>
<br>
<br>
Today's Topics:<br>
<br>
1. Re: Help: Using Nginx Reverse Proxy bypass traffic in to a<br>
application running in a container (Amila Gunathilaka)<br>
<br>
<br>
----------------------------------------------------------------------<br>
<br>
Message: 1<br>
Date: Sat, 29 May 2021 19:11:38 +0530<br>
From: Amila Gunathilaka <<a href="mailto:amila.kdam@gmail.com" target="_blank">amila.kdam@gmail.com</a>><br>
To: <a href="mailto:nginx@nginx.org" target="_blank">nginx@nginx.org</a>, <a href="mailto:francis@daoine.org" target="_blank">francis@daoine.org</a><br>
Subject: Re: Help: Using Nginx Reverse Proxy bypass traffic in to a<br>
application running in a container<br>
Message-ID:<br>
<<a href="mailto:CALqQtdy2RvkhHjAYQkZU-EhzOZG%2B9fnG-GXE9wuWzM-SQTbNjg@mail.gmail.com" target="_blank">CALqQtdy2RvkhHjAYQkZU-EhzOZG+9fnG-GXE9wuWzM-SQTbNjg@mail.gmail.com</a>><br>
Content-Type: text/plain; charset="utf-8"<br>
<br>
Hi Francis,<br>
<br>
Thanks for your reply. Here are my findings, progress, and current<br>
situation regarding my issue. First I will answer your questions, so<br>
that you have a better idea of how to guide me and whether I'm on the<br>
correct path.<br>
<br>
>As I understand it, the load balancer is making the request "OPTIONS /"<br>
>to nginx, and nginx is responding with a http 405, and you don't want<br>
>nginx to do that.<br>
<br>
>What response do you want nginx to give to the request?<br>
<br>
Yes, you are absolutely right: I wanted nginx to stop sending that 405<br>
response and instead give a success response (200), or even a 401, which<br>
would confirm that my proxy_pass and basic auth are working.<br>
<br>
Also, I think that 405 response is coming *from nginx itself* to the<br>
external load balancer, because the external load balancer communicates<br>
directly with nginx (port 80), and my upstream server (the port 9091<br>
server) is not a webapp; it's just a binary running inside a Docker<br>
container. But it may still be coming from the 9091 server app. You<br>
could investigate the 9091 server app here if you are interested -<br>
<a href="https://hub.docker.com/r/prom/pushgateway/tags?page=1&ordering=last_updated" rel="noreferrer" target="_blank">https://hub.docker.com/r/prom/pushgateway/tags?page=1&ordering=last_updated</a> .<br>
Let me know as well, if you can find out, whether it's the 9091 app or<br>
nginx causing the problem.<br>
<br>
<br>
Anyway, I decided to fix the OPTIONS issue on the external load<br>
balancer itself: I logged in to the external load balancer's config<br>
page and changed the HTTP health checks from OPTIONS to the *GET*<br>
method.<br>
And yes, the 405 error is gone now. But now I'm getting 401 responses,<br>
which should be the correct response since I'm using basic auth in my<br>
nginx.conf file. Below is my nginx.conf, FYI:<br>
<br>
worker_rlimit_nofile 30000;<br>
events {<br>
worker_connections 30000;<br>
}<br>
<br>
http {<br>
<br>
#upstream pushgateway_upstreams {<br>
# server <a href="http://127.0.0.1:9091" rel="noreferrer" target="_blank">127.0.0.1:9091</a>;<br>
# }<br>
server {<br>
listen <a href="http://172.25.234.105:80" rel="noreferrer" target="_blank">172.25.234.105:80</a>;<br>
#server_name 172.25.234.105;<br>
location /metrics {<br>
proxy_pass <a href="http://127.0.0.1:9091/metrics" rel="noreferrer" target="_blank">http://127.0.0.1:9091/metrics</a>;<br>
auth_basic "PROMETHEUS PUSHGATEWAY Login Area";<br>
auth_basic_user_file /etc/nginx/.htpasswd;<br>
}<br>
<br>
}<br>
}<br>
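In case it helps anyone reading along, the credentials file that auth_basic_user_file points at can be generated without apache2-utils; this is only a sketch, and the user name "amila" and password "secret" below are placeholders, not my real credentials:<br>

```shell
# Build an .htpasswd-style entry: openssl's -apr1 option produces the
# Apache MD5 format that nginx's auth_basic module understands.
user="amila"    # placeholder user name
pass="secret"   # placeholder password
entry="$user:$(openssl passwd -apr1 "$pass")"
# Write a sample file in the current directory; the real target
# would be /etc/nginx/.htpasswd.
printf '%s\n' "$entry" > htpasswd.sample
cat htpasswd.sample
```

After writing the real file to /etc/nginx/.htpasswd, an `nginx -s reload` picks it up.<br>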
<br>
<br>
So I can confirm that proxy_pass is working, because when I browse my<br>
application it now returns the 401 message.<br>
<br>
<br>
curl -v <a href="http://172.25.234.105:80/metrics" rel="noreferrer" target="_blank">http://172.25.234.105:80/metrics</a><br>
* Trying 172.25.234.105:80...<br>
* TCP_NODELAY set<br>
* Connected to 172.25.234.105 (172.25.234.105) port 80 (#0)<br>
> GET /metrics HTTP/1.1<br>
> Host: 172.25.234.105<br>
> User-Agent: curl/7.68.0<br>
> Accept: */*<br>
><br>
* Mark bundle as not supporting multiuse<br>
< HTTP/1.1 401 Unauthorized<br>
< Server: nginx/1.18.0 (Ubuntu)<br>
< Date: Sat, 29 May 2021 13:29:00 GMT<br>
< Content-Type: text/html<br>
< Content-Length: 188<br>
< Connection: keep-alive<br>
< WWW-Authenticate: Basic realm="PROMETHEUS PUSHGATEWAY Login Area"<br>
<<br>
<html><br>
<head><title>401 Authorization Required</title></head><br>
<body><br>
<center><h1>401 Authorization Required</h1></center><br>
<hr><center>nginx/1.18.0 (Ubuntu)</center><br>
</body><br>
</html><br>
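For completeness, the same request with credentials supplied should get past the 401, e.g. `curl -u amila:secret http://172.25.234.105:80/metrics` (user/password again being placeholders for whatever is in .htpasswd). All `-u` does is add one request header, which this small sketch reproduces by hand:<br>

```shell
# HTTP Basic auth is just base64("user:password") carried in an
# Authorization header; "amila:secret" is a placeholder pair.
cred="$(printf '%s' 'amila:secret' | base64)"
echo "Authorization: Basic $cred"
# Equivalent requests (placeholder credentials):
#   curl -u amila:secret http://172.25.234.105:80/metrics
#   curl -H "Authorization: Basic $cred" http://172.25.234.105:80/metrics
```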
<br>
<br>
Everything seems fine for now. Any questions or enhancements are<br>
welcome. Thanks, Francis.<br>
<br>
<br>
Thanks<br>
<br>
Amila<br>
DevOps Engineer<br>
<br>
<br>
<br>
<br>
<br>
<br>
On Fri, May 28, 2021 at 1:30 PM <<a href="mailto:nginx-request@nginx.org" target="_blank">nginx-request@nginx.org</a>> wrote:<br>
<br>
><br>
><br>
> Today's Topics:<br>
><br>
> 1. Request for comments on Nginx configuration (mandela)<br>
> 2. How to do a large buffer size > 64k uWSGI requests with Nginx<br>
> proxy | uwsgi request is too big with nginx (Rai Mohammed)<br>
> 3. Unit 1.24.0 release (Valentin V. Bartenev)<br>
> 4. Re: How to do a large buffer size > 64k uWSGI requests with<br>
> Nginx proxy | uwsgi request is too big with nginx (Maxim Dounin)<br>
> 5. Re: How to do a large buffer size > 64k uWSGI requests with<br>
> Nginx proxy | uwsgi request is too big with nginx (Rai Mohammed)<br>
> 6. Re: Help: Using Nginx Reverse Proxy bypass traffic in to a<br>
> application running in a container (Francis Daly)<br>
><br>
><br>
> ----------------------------------------------------------------------<br>
><br>
> Message: 1<br>
> Date: Thu, 27 May 2021 14:55:03 -0400<br>
> From: "mandela" <<a href="mailto:nginx-forum@forum.nginx.org" target="_blank">nginx-forum@forum.nginx.org</a>><br>
> To: <a href="mailto:nginx@nginx.org" target="_blank">nginx@nginx.org</a><br>
> Subject: Request for comments on Nginx configuration<br>
> Message-ID:<br>
> <<br>
> <a href="mailto:978b154f0f2638185fd19dbcca86a663.NginxMailingListEnglish@forum.nginx.org" target="_blank">978b154f0f2638185fd19dbcca86a663.NginxMailingListEnglish@forum.nginx.org</a>><br>
><br>
> Content-Type: text/plain; charset=UTF-8<br>
><br>
> Hello all. I would like to have comments from the Nginx community on the<br>
> following configuration:<br>
><br>
> worker_processes auto;<br>
> error_log /var/www/log/nginx.log;<br>
><br>
> events {<br>
> multi_accept on;<br>
> worker_connections 16384;<br>
> }<br>
><br>
> http {<br>
> include nginx.deny;<br>
> include mime.types;<br>
> default_type application/octet-stream;<br>
> aio on;<br>
> sendfile on;<br>
> tcp_nopush on;<br>
> gzip on;<br>
> gzip_comp_level 6;<br>
> gzip_min_length 1024;<br>
> gzip_types<br>
> application/javascript<br>
> application/json<br>
> application/xml<br>
> image/svg+xml<br>
> image/x-icon<br>
> text/plain<br>
> text/css<br>
> text/xml<br>
> ;<br>
> lua_shared_dict dict 16k;<br>
> log_format main $time_iso8601<br>
> ' srs="$status"'<br>
> ' srt="$request_time"'<br>
> ' crl="$request"'<br>
> ' crh="$host"'<br>
> ' cad="$remote_addr"'<br>
> ' ssp="$server_port"'<br>
> ' scs="$upstream_cache_status"'<br>
> ' sua="$upstream_addr"'<br>
> ' suc="$upstream_connect_time"'<br>
> ' sut="$upstream_response_time"'<br>
> ' sgz="$gzip_ratio"'<br>
> ' sbs="$body_bytes_sent"'<br>
> ' cau="$remote_user"'<br>
> ' ccr="$connection_requests"'<br>
> ' ccp="$pipe"'<br>
> ' crs="$scheme"'<br>
> ' crm="$request_method"'<br>
> ' cru="$request_uri"'<br>
> ' crp="$server_protocol"'<br>
> ' chh="$http_host"'<br>
> ' cha="$http_user_agent"'<br>
> ' chr="$http_referer"'<br>
> ' chf="$http_x_forwarded_for"'<br>
> ;<br>
> server_tokens off;<br>
> reset_timedout_connection on;<br>
> access_log /var/www/log/access.log main;<br>
><br>
> fastcgi_cache main;<br>
> fastcgi_cache_key $host:$server_port$uri;<br>
> fastcgi_cache_methods GET HEAD;<br>
> fastcgi_ignore_headers Cache-Control Expires;<br>
> fastcgi_cache_path /tmp/nginx<br>
> levels=2:2<br>
> keys_zone=main:4m<br>
> inactive=24h<br>
> ;<br>
> ssl_certificate /etc/ssl/server.pem;<br>
> ssl_certificate_key /etc/ssl/server.key;<br>
> ssl_prefer_server_ciphers on;<br>
> ssl_session_cache shared:SSL:4m;<br>
> ssl_session_timeout 15m;<br>
><br>
> upstream upstream {<br>
> server unix:/tmp/php-fpm.sock;<br>
> server <a href="http://127.0.0.1:9000" rel="noreferrer" target="_blank">127.0.0.1:9000</a>;<br>
> server [::1]:9000;<br>
> }<br>
><br>
> map $http_origin $_origin {<br>
> default *;<br>
> '' '';<br>
> }<br>
><br>
> server {<br>
> listen 80;<br>
> return 301 https://$host$request_uri;<br>
> }<br>
><br>
> server {<br>
> listen 443 ssl http2;<br>
> include nginx.filter;<br>
> location / {<br>
> set $_v1 '';<br>
> set $_v2 '';<br>
> set $_v3 '';<br>
> rewrite_by_lua_block {<br>
> local dict = ngx.shared.dict<br>
> local host = ngx.var.host<br>
> local data = dict:get(host)<br>
> if data == nil then<br>
> local labels = {}<br>
> for s in host:gmatch('[^.]+') do<br>
> table.insert(labels, 1, s)<br>
> end<br>
> data = labels[1] or ''<br>
> local index = 2<br>
> while index <= #labels and #data <<br>
> 7 do<br>
> data = data .. '/' ..<br>
> labels[index]<br>
> index = index + 1<br>
> end<br>
> local f = '/usr/home/www/src/' ..<br>
> data .. '/app.php'<br>
> local _, _, code = os.rename(f, f)<br>
> if code == 2 then<br>
> return ngx.exit(404)<br>
> end<br>
> if labels[index] == 'cdn' then<br>
> data = data ..<br>
> '|/tmp/www/cdn/' .. data<br>
> else<br>
> data = data ..<br>
> '|/var/www/pub/'<br>
> ..<br>
> table.concat(labels, '/') .. '/-'<br>
> end<br>
> data = data .. '|' .. f<br>
> dict:add(host, data)<br>
> ngx.log(ngx.ERR,<br>
> 'dict:add('..host..','..data..')')<br>
> end<br>
> local i = 1<br>
> for s in data:gmatch('[^|]+') do<br>
> ngx.var["_v" .. i] = s<br>
> i = i + 1<br>
> end<br>
> }<br>
> alias /;<br>
> try_files<br>
> $_v2$uri<br>
> /var/www/pub/$_v1/!$uri<br>
> /var/www/pub/!$uri<br>
> @;<br>
> add_header Access-Control-Allow-Origin $_origin;<br>
> expires 28d;<br>
> }<br>
> location dir: {<br>
> alias /;<br>
> index :none;<br>
> autoindex on;<br>
> }<br>
> location file: {<br>
> alias /;<br>
> }<br>
> location @ {<br>
> fastcgi_param DOCUMENT_ROOT $_v2;<br>
> fastcgi_param SCRIPT_FILENAME $_v3;<br>
> fastcgi_param SCRIPT_NAME $fastcgi_script_name;<br>
> fastcgi_param SERVER_PROTOCOL $server_protocol;<br>
> fastcgi_param SERVER_ADDR $server_addr;<br>
> fastcgi_param SERVER_PORT $server_port;<br>
> fastcgi_param SERVER_NAME $host;<br>
> fastcgi_param REMOTE_ADDR $remote_addr;<br>
> fastcgi_param REMOTE_PORT $remote_port;<br>
> fastcgi_param REQUEST_SCHEME $scheme;<br>
> fastcgi_param REQUEST_METHOD $request_method;<br>
> fastcgi_param REQUEST_URI $request_uri;<br>
> fastcgi_param QUERY_STRING $query_string;<br>
> fastcgi_param CONTENT_TYPE $content_type;<br>
> fastcgi_param CONTENT_LENGTH $content_length;<br>
> fastcgi_pass upstream;<br>
> }<br>
> }<br>
> }<br>
><br>
> Posted at Nginx Forum:<br>
> <a href="https://forum.nginx.org/read.php?2,291674,291674#msg-291674" rel="noreferrer" target="_blank">https://forum.nginx.org/read.php?2,291674,291674#msg-291674</a><br>
><br>
><br>
><br>
> ------------------------------<br>
><br>
> Message: 2<br>
> Date: Thu, 27 May 2021 14:55:24 -0400<br>
> From: "Rai Mohammed" <<a href="mailto:nginx-forum@forum.nginx.org" target="_blank">nginx-forum@forum.nginx.org</a>><br>
> To: <a href="mailto:nginx@nginx.org" target="_blank">nginx@nginx.org</a><br>
> Subject: How to do a large buffer size > 64k uWSGI requests with Nginx<br>
> proxy | uwsgi request is too big with nginx<br>
> Message-ID:<br>
> <<br>
> <a href="mailto:c303563b562a6243138081c03cc4c7ff.NginxMailingListEnglish@forum.nginx.org" target="_blank">c303563b562a6243138081c03cc4c7ff.NginxMailingListEnglish@forum.nginx.org</a>><br>
><br>
> Content-Type: text/plain; charset=UTF-8<br>
><br>
> How to do a large buffer size > 64k uWSGI requests with Nginx proxy<br>
><br>
> Deployment stack :<br>
> Odoo ERP 12<br>
> Python 3.7.10 and Werkzeug 0.16.1 as backend<br>
> Nginx proxy : 1.20.0<br>
> uWSGI : 2.0.19.1<br>
> OS : FreeBSD 13.0-RELEASE<br>
><br>
> Nginx throw an alert from uwsgi of request is too big<br>
> Alert : uwsgi request is too big: 81492, client: 10.29.79.250, server:<br>
> odoo12ce-erp, request: "GET /web/webclient/..........."<br>
><br>
> As you can see I increased the "uwsgi_buffer_size " in both uwsgi.ini and<br>
> nginx.conf.<br>
><br>
> Nginx config :<br>
> `{<br>
><br>
> # increase the size of the buffers to handle odoo data<br>
> # Activate uwsgi_buffering<br>
> uwsgi_buffering on;<br>
> uwsgi_buffers 16 128k;<br>
> uwsgi_buffer_size 128k;<br>
> uwsgi_busy_buffers_size 256k;<br>
> # uwsgi_max_temp_file_size with zero value disables buffering of<br>
> responses to temporary files<br>
> uwsgi_max_temp_file_size 0;<br>
> uwsgi_temp_file_write_size 256k;<br>
><br>
> uwsgi_read_timeout 900s;<br>
> uwsgi_connect_timeout 900s;<br>
> uwsgi_send_timeout 900s;<br>
><br>
> }`<br>
><br>
> uwsgi.ini config :<br>
><br>
> `<br>
><br>
> [uwsgi]<br>
> strict = true<br>
> pcre-jit = true<br>
> #limit-as = 1024<br>
> #never-swap = true<br>
><br>
> pidfile = /var/run/odoo_erp/odoo12ce_uwsgi.pid<br>
> # safe-pidfile = /var/run/odoo_erp/odoo12ce.pid<br>
><br>
> # Enable REUSE_PORT flag on socket to allow multiple instances binding<br>
> on the same address (BSD only).<br>
> reuse-port = true<br>
><br>
> # Testing with www or odoo12ce<br>
> uid = odoo12ce<br>
> gid = odoo12ce<br>
><br>
> # To test and verification<br>
> callable = application<br>
> # To test and verification<br>
> #module = odoo.service.wsgi_server:application<br>
><br>
> # enable uwsgi master process<br>
> master = true<br>
> lazy = true<br>
> lazy-apps=true<br>
><br>
> # turn on memory usage report<br>
> #memory-report=true<br>
><br>
> enable-threads = true<br>
> threads = 2<br>
> thunder-lock = true<br>
> so-keepalive = true<br>
><br>
> buffer-size = 262144<br>
> http-buffer-size = 262144<br>
><br>
> response-headers-limit = 262144<br>
> http-headers-timeout = 900<br>
> # set max connections to 1024 in uWSGI<br>
> listen = 1024<br>
><br>
> so-send-timeout = 900<br>
> socket-send-timeout = 900<br>
> so-write-timeout = 900<br>
> socket-write-timeout = 900<br>
><br>
> http-timeout = 900<br>
> socket-timeout = 900<br>
><br>
> wsgi-accept-buffer = true<br>
> wsgi-accept-buffers = true<br>
> # clear environment on exit and Delete sockets during shutdown<br>
> vacuum = true<br>
> single-interpreter = true<br>
><br>
> # Shutdown when receiving SIGTERM (default is respawn)<br>
> die-on-term = true<br>
> need-app = true<br>
><br>
> # Disable built-in logging<br>
> disable-logging = false<br>
><br>
> # but log 4xx's and 5xx's anyway<br>
> log-4xx = true<br>
> log-5xx = true<br>
><br>
> # full path to Odoo12ce project's root directory<br>
> chdir = /odoo_erp/odoo12ce/odoo12ce_server<br>
> #chdir2 = = /odoo_erp/odoo12ce/odoo12ce_server<br>
><br>
> pythonpath = /odoo_erp/odoo12ce/odoo12ce_server<br>
><br>
> # odoo12ce's wsgi file<br>
> wsgi-file = /odoo_erp/odoo12ce/odoo12ce_server/setup/odoo12ce-uwsgi.py<br>
><br>
> #emperor = /odoo_erp/odoo12ce/vassals<br>
><br>
> uwsgi-socket = <a href="http://127.0.0.1:8070" rel="noreferrer" target="_blank">127.0.0.1:8070</a><br>
> uwsgi-socket = <a href="http://127.0.0.1:8170" rel="noreferrer" target="_blank">127.0.0.1:8170</a><br>
><br>
> # daemonize uwsgi and write messages into given log<br>
> daemonize = /var/log/odoo_erp/odoo12ce/odoo12ce_uwsgi_emperor.log<br>
><br>
> # Restart workers after this many requests<br>
> max-requests = 2000<br>
><br>
> # Restart workers after this many seconds<br>
> max-worker-lifetime = 3600<br>
><br>
> # Restart workers after this much resident memory<br>
> reload-on-rss = 2048<br>
><br>
> # How long to wait before forcefully killing workers<br>
> worker-reload-mercy = 90<br>
><br>
> # Maximum number of workers allowed (cpu * 2)<br>
> processes = 8<br>
><br>
> `<br>
><br>
> Posted at Nginx Forum:<br>
> <a href="https://forum.nginx.org/read.php?2,291675,291675#msg-291675" rel="noreferrer" target="_blank">https://forum.nginx.org/read.php?2,291675,291675#msg-291675</a><br>
><br>
><br>
><br>
> ------------------------------<br>
><br>
> Message: 3<br>
> Date: Thu, 27 May 2021 22:26:29 +0300<br>
> From: "Valentin V. Bartenev" <<a href="mailto:vbart@nginx.com" target="_blank">vbart@nginx.com</a>><br>
> To: <a href="mailto:nginx@nginx.org" target="_blank">nginx@nginx.org</a><br>
> Subject: Unit 1.24.0 release<br>
> Message-ID: <4650105.31r3eYUQgx@vbart-laptop><br>
> Content-Type: text/plain; charset="UTF-8"<br>
><br>
> Hi,<br>
><br>
> I'm glad to announce a new release of NGINX Unit.<br>
><br>
> This one is full of shiny new features. But before I dive into the<br>
> details,<br>
> let me introduce our new developers without whom this release wouldn't be<br>
> so<br>
> feature-rich. Please, welcome Zhidao Hong (洪志道) and Oisín Canty.<br>
><br>
> Zhidao has already been contributing to various nginx open-source projects<br>
> for<br>
> years as a community member, and I'm very excited to finally have him on<br>
> board.<br>
><br>
> Oisín is a university student who's very interested in Unit; he joined our<br>
> dev<br>
> team as an intern and already shown solid coding skills, curiosity, and<br>
> attention to details, which is so important to our project. Good job!<br>
><br>
><br>
> Now, back to the features. I'd like to highlight the first of our<br>
> improvements<br>
> in serving static media assets.<br>
><br>
> :: MIME Type Filtering ::<br>
><br>
> Now, you can restrict file serving by MIME type:<br>
><br>
> {<br>
> "share": "/www/data",<br>
> "types": [ "image/*", "video/*" ]<br>
> }<br>
><br>
> The configuration above allows only files with various video and image<br>
> extensions, but all other requests will return status code 403.<br>
><br>
> In particular, this goes well with the "fallback" option that performs<br>
> another<br>
> action if the "share" returns a 40x error:<br>
><br>
> {<br>
> "share": "/www/data",<br>
> "types": [ "!application/x-httpd-php" ],<br>
><br>
> "fallback": {<br>
> "pass": "applications/php"<br>
> }<br>
> }<br>
><br>
> Here, all requests to existing files other than ".php" will be served as<br>
> static<br>
> content while the rest will be passed to a PHP application.<br>
><br>
> More examples and documentation snippets are available here:<br>
><br>
> - <a href="https://unit.nginx.org/configuration/#mime-filtering" rel="noreferrer" target="_blank">https://unit.nginx.org/configuration/#mime-filtering</a><br>
><br>
><br>
> :: Chrooting and Path Restrictions When Serving Files ::<br>
><br>
> As we take security seriously, now Unit introduces the ability to chroot<br>
> not only its application processes but also the static files it serves on<br>
> a per-request basis. Additionally, you can restrict traversal of mounting<br>
> points and symbolic link resolution:<br>
><br>
> {<br>
> "share": "/www/data/static/",<br>
> "chroot": "/www/data/",<br>
> "follow_symlinks": false,<br>
> "traverse_mounts": false<br>
> }<br>
><br>
> See here for more information:<br>
><br>
> - <a href="https://unit.nginx.org/configuration/#path-restrictions" rel="noreferrer" target="_blank">https://unit.nginx.org/configuration/#path-restrictions</a><br>
><br>
> For details of Unit application process isolation abilities:<br>
><br>
> - <a href="https://unit.nginx.org/configuration/#process-isolation" rel="noreferrer" target="_blank">https://unit.nginx.org/configuration/#process-isolation</a><br>
><br>
><br>
> Other notable features unrelated to static file serving:<br>
><br>
> * Multiple WSGI/ASGI Python entry points per process<br>
><br>
> It allows loading multiple modules or app entry points into a single<br>
> Python<br>
> process, choosing between them when handling requests with the full<br>
> power of<br>
> Unit's routes system.<br>
><br>
> See here for Python's "targets" object description:<br>
><br>
> - <a href="https://unit.nginx.org/configuration/#configuration-python-targets" rel="noreferrer" target="_blank">https://unit.nginx.org/configuration/#configuration-python-targets</a><br>
><br>
> And here, more info about Unit's internal routing:<br>
><br>
> - <a href="https://unit.nginx.org/configuration/#routes" rel="noreferrer" target="_blank">https://unit.nginx.org/configuration/#routes</a><br>
><br>
><br>
> * Automatic overloading of "http" and "websocket" modules in Node.js<br>
><br>
> Now you can run Node.js apps on Unit without touching their sources:<br>
><br>
> - <a href="https://unit.nginx.org/configuration/#node-js" rel="noreferrer" target="_blank">https://unit.nginx.org/configuration/#node-js</a><br>
><br>
><br>
> * Applying OpenSSL configuration commands<br>
><br>
> Finally, you can control various TLS settings via OpenSSL's generic<br>
> configuration interface with all the dynamic power of Unit:<br>
><br>
> - <a href="https://unit.nginx.org/configuration/#ssl-tls-configuration" rel="noreferrer" target="_blank">https://unit.nginx.org/configuration/#ssl-tls-configuration</a><br>
><br>
><br>
> The full changelog for the release:<br>
><br>
> Changes with Unit 1.24.0 27 May<br>
> 2021<br>
><br>
> *) Change: PHP added to the default MIME type list.<br>
><br>
> *) Feature: arbitrary configuration of TLS connections via OpenSSL<br>
> commands.<br>
><br>
> *) Feature: the ability to limit static file serving by MIME types.<br>
><br>
> *) Feature: support for chrooting, rejecting symlinks, and rejecting<br>
> mount point traversal on a per-request basis when serving static<br>
> files.<br>
><br>
> *) Feature: a loader for automatically overriding the "http" and<br>
> "websocket" modules in Node.js.<br>
><br>
> *) Feature: multiple "targets" in Python applications.<br>
><br>
> *) Feature: compatibility with Ruby 3.0.<br>
><br>
> *) Bugfix: the router process could crash while closing a TLS<br>
> connection.<br>
><br>
> *) Bugfix: a segmentation fault might have occurred in the PHP module<br>
> if<br>
> fastcgi_finish_request() was used with the "auto_globals_jit" option<br>
> enabled.<br>
><br>
><br>
> That's all for today, but even more exciting features are poised for the<br>
> upcoming releases:<br>
><br>
> - statistics API<br>
> - process control API<br>
> - variables from regexp captures in the "match" object<br>
> - simple request rewrites using variables<br>
> - variables support in static file serving options<br>
> - ability to override client IP from the X-Forwarded-For header<br>
> - TLS sessions cache and tickets<br>
><br>
> Also, please check our GitHub to follow the development and discuss new<br>
> features:<br>
><br>
> - <a href="https://github.com/nginx/unit" rel="noreferrer" target="_blank">https://github.com/nginx/unit</a><br>
><br>
> Stay tuned!<br>
><br>
> wbr, Valentin V. Bartenev<br>
><br>
><br>
><br>
><br>
><br>
> ------------------------------<br>
><br>
> Message: 4<br>
> Date: Fri, 28 May 2021 02:53:53 +0300<br>
> From: Maxim Dounin <<a href="mailto:mdounin@mdounin.ru" target="_blank">mdounin@mdounin.ru</a>><br>
> To: <a href="mailto:nginx@nginx.org" target="_blank">nginx@nginx.org</a><br>
> Subject: Re: How to do a large buffer size > 64k uWSGI requests with<br>
> Nginx proxy | uwsgi request is too big with nginx<br>
> Message-ID: <<a href="mailto:YLAxEUEUyrvdUlQr@mdounin.ru" target="_blank">YLAxEUEUyrvdUlQr@mdounin.ru</a>><br>
> Content-Type: text/plain; charset=us-ascii<br>
><br>
> Hello!<br>
><br>
> On Thu, May 27, 2021 at 02:55:24PM -0400, Rai Mohammed wrote:<br>
><br>
> > How to do a large buffer size > 64k uWSGI requests with Nginx proxy<br>
> ><br>
> > Deployment stack :<br>
> > Odoo ERP 12<br>
> > Python 3.7.10 and Werkzeug 0.16.1 as backend<br>
> > Nginx proxy : 1.20.0<br>
> > uWSGI : 2.0.19.1<br>
> > OS : FreeBSD 13.0-RELEASE<br>
> ><br>
> > Nginx throw an alert from uwsgi of request is too big<br>
> > Alert : uwsgi request is too big: 81492, client: 10.29.79.250, server:<br>
> > odoo12ce-erp, request: "GET /web/webclient/..........."<br>
> ><br>
> > As you can see I increased the "uwsgi_buffer_size " in both uwsgi.ini and<br>
> > nginx.conf.<br>
><br>
> The uwsgi protocol uses 16-bit datasize field[1], and this limits<br>
> maximum size of all headers in a request to uwsgi backends. The<br>
> error message from nginx suggests you are hitting this limit.<br>
> Unfortunately, using larger buffers won't help here.<br>
><br>
> In most cases such a huge request headers indicate that there is a<br>
> bug somewhere. For example, nginx by default limits total size of<br>
> request headers to 32k (see [2]). Similar 64k limit also exists<br>
> in FastCGI (though with protocol clearly defining how to provide<br>
> additional data if needed, just not implemented in nginx), and the<br>
> only case when it was questioned was due to a miscoded client (see<br>
> [3]).<br>
><br>
> If nevertheless such a huge request headers are intentional, the<br>
> most simple solution probably would be to switch to a different<br>
> protocol, such as HTTP.<br>
><br>
> [1] <a href="https://uwsgi-docs.readthedocs.io/en/latest/Protocol.html" rel="noreferrer" target="_blank">https://uwsgi-docs.readthedocs.io/en/latest/Protocol.html</a><br>
> [2] <a href="http://nginx.org/r/large_client_header_buffers" rel="noreferrer" target="_blank">http://nginx.org/r/large_client_header_buffers</a><br>
> [3] <a href="https://trac.nginx.org/nginx/ticket/239" rel="noreferrer" target="_blank">https://trac.nginx.org/nginx/ticket/239</a><br>
><br>
> --<br>
> Maxim Dounin<br>
> <a href="http://mdounin.ru/" rel="noreferrer" target="_blank">http://mdounin.ru/</a><br>
><br>
><br>
> ------------------------------<br>
><br>
> Message: 5<br>
> Date: Thu, 27 May 2021 21:59:13 -0400<br>
> From: "Rai Mohammed" <<a href="mailto:nginx-forum@forum.nginx.org" target="_blank">nginx-forum@forum.nginx.org</a>><br>
> To: <a href="mailto:nginx@nginx.org" target="_blank">nginx@nginx.org</a><br>
> Subject: Re: How to do a large buffer size > 64k uWSGI requests with<br>
> Nginx proxy | uwsgi request is too big with nginx<br>
> Message-ID:<br>
> <<br>
> <a href="mailto:06b5de642b1389c66bdbe8e193824603.NginxMailingListEnglish@forum.nginx.org" target="_blank">06b5de642b1389c66bdbe8e193824603.NginxMailingListEnglish@forum.nginx.org</a>><br>
><br>
> Content-Type: text/plain; charset=UTF-8<br>
><br>
> Hello,<br>
> Yes I have searched the request that generates this big size header, and<br>
> it's a Get URI<br>
> pulling all the features installed and requested by the user of the ERP.<br>
><br>
> Before I integrate the uWSGI layer, the stack deployment with Nginx KTLS<br>
> HTTP2 works perfectly and there's<br>
> no problem of buffer sizing.<br>
> The reason why I added the uWSGI layer, is for using the uwsgi-socket<br>
> binary<br>
> protocol, and it work fine and very fast<br>
> decreasing the time load for the principal web page.<br>
> So for now I have to switch to the http-socket protocol and configuring<br>
> HTTP2 in uWSGI.<br>
> I hope in the future Nginx will allow using the huge headers sizing.<br>
><br>
> Thanks for your reply and clarifications.<br>
><br>
> Posted at Nginx Forum:<br>
> <a href="https://forum.nginx.org/read.php?2,291675,291680#msg-291680" rel="noreferrer" target="_blank">https://forum.nginx.org/read.php?2,291675,291680#msg-291680</a><br>
><br>
><br>
><br>
> ------------------------------<br>
><br>
> Message: 6<br>
> Date: Fri, 28 May 2021 09:00:00 +0100<br>
> From: Francis Daly <<a href="mailto:francis@daoine.org" target="_blank">francis@daoine.org</a>><br>
> To: <a href="mailto:nginx@nginx.org" target="_blank">nginx@nginx.org</a><br>
> Subject: Re: Help: Using Nginx Reverse Proxy bypass traffic in to a<br>
> application running in a container<br>
> Message-ID: <<a href="mailto:20210528080000.GC11167@daoine.org" target="_blank">20210528080000.GC11167@daoine.org</a>><br>
> Content-Type: text/plain; charset=us-ascii<br>
><br>
> On Tue, May 25, 2021 at 09:47:47PM +0530, Amila Gunathilaka wrote:<br>
><br>
> Hi there,<br>
><br>
> > I'm sorry for taking time to reply to this, you were so keen about my<br>
> > problem. Thank you.<br>
><br>
> No worries at all -- the mailing list is not an immediate-response medium.<br>
><br>
> > Actually my problem was when sending *response *to the load balancer from<br>
> > the nginx ( not the request, it should be corrected as the *response *in<br>
> my<br>
> > previous email).<br>
> > Such as my external load balancer is always doing a health check for my<br>
> > nginx port (80) , below is the *response *message in the<br>
> > /var/log/nginx/access.log against the health check request coming from<br>
> > the external-loadbalancer.<br>
><br>
> As I understand it, the load balancer is making the request "OPTIONS /"<br>
> to nginx, and nginx is responding with a http 405, and you don't want<br>
> nginx to do that.<br>
><br>
> What response do you want nginx to give to the request?<br>
><br>
> Your config make it look like nginx is told to proxy_pass the OPTIONS<br>
> request to your port 9091 server, so I presume that your port 9091 server<br>
> is responding 405 to the OPTIONS request and nginx is passing the response<br>
> from the 9091-upstream to the load-balancer client.<br>
><br>
> Your port 9091 logs or traffic analysis should show that that is the case.<br>
><br>
> If that is the case, you *could* fix it by telling your 9091-upstream to<br>
> respond to the "OPTIONS /" request however you want (using its config); or<br>
> you could configure nginx to intercept the request and handle it itself,<br>
> without proxy_pass'ing it.<br>
><br>
><br>
> The first case would mean that the "health check" is actually testing<br>
> the full nginx-to-upstream chain; the second would have it only testing<br>
> that nginx is responding.<br>
><br>
> If you decide that you want nginx to handle this request itself, and to<br>
> respond with a http 204, you could add something like<br>
><br>
> if ($request_method = "OPTIONS") { return 204; }<br>
><br>
> inside the "location /" block.<br>
><br>
> (Strictly: that would tell nginx to handle all "OPTIONS /anything"<br>
> requests, not just "OPTIONS /".)<br>
><br>
> You would not need the error_page directives that you show.<br>
><br>
><br>
> You could instead add a new "location = /" block, and do the OPTIONS<br>
> check there; but you would probably also have to duplicate the three<br>
> other lines from the "location /" block -- sometimes people prefer<br>
> "tidy-looking" configuration over "correctness and probable machine<br>
> efficiency". Pick which you like; if you do not measure a difference,<br>
> there is not a difference that you care about.<br>
><br>
> That is, you want either one location:<br>
><br>
> > server {<br>
> > listen 80;<br>
> > server_name 172.25.234.105;<br>
><br>
> > location / {<br>
><br>
> if ($request_method = "OPTIONS") { return 204; }<br>
><br>
> > proxy_pass <a href="http://127.0.0.1:9091" rel="noreferrer" target="_blank">http://127.0.0.1:9091</a>;<br>
> > auth_basic "PROMETHEUS PUSHGATEWAY Login Area";<br>
> > auth_basic_user_file /etc/nginx/.htpasswd;<br>
> > }<br>
> > }<br>
><br>
> or two locations:<br>
><br>
> location = / {<br>
> if ($request_method = "OPTIONS") { return 204; }<br>
> proxy_pass <a href="http://127.0.0.1:9091" rel="noreferrer" target="_blank">http://127.0.0.1:9091</a>;<br>
> auth_basic "PROMETHEUS PUSHGATEWAY Login Area";<br>
> auth_basic_user_file /etc/nginx/.htpasswd;<br>
> }<br>
><br>
> location / {<br>
> proxy_pass <a href="http://127.0.0.1:9091" rel="noreferrer" target="_blank">http://127.0.0.1:9091</a>;<br>
> auth_basic "PROMETHEUS PUSHGATEWAY Login Area";<br>
> auth_basic_user_file /etc/nginx/.htpasswd;<br>
> }<br>
><br>
> (and, if you use the two, you could potentially move the "auth_basic"<br>
> and "auth_basic_user_file" outside the "location", to be directly within<br>
> "server"; that does depend on what else is in your config file.)<br>
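><br>
> For example, the two-location form with the auth directives hoisted to<br>
> the "server" level could look like this (a sketch only, assuming nothing<br>
> else in that "server" block needs different auth settings; note that the<br>
> rewrite-module "if ... return" runs before auth_basic, so the OPTIONS<br>
> request is still answered without credentials):<br>
><br>
> server {<br>
> listen 80;<br>
> server_name 172.25.234.105;<br>
> auth_basic "PROMETHEUS PUSHGATEWAY Login Area";<br>
> auth_basic_user_file /etc/nginx/.htpasswd;<br>
><br>
> location = / {<br>
> if ($request_method = "OPTIONS") { return 204; }<br>
> proxy_pass <a href="http://127.0.0.1:9091" rel="noreferrer" target="_blank">http://127.0.0.1:9091</a>;<br>
> }<br>
><br>
> location / {<br>
> proxy_pass <a href="http://127.0.0.1:9091" rel="noreferrer" target="_blank">http://127.0.0.1:9091</a>;<br>
> }<br>
> }<br>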
><br>
> If you want something else in the response to the OPTIONS request,<br>
> you can change the "return" response code, or add headers with<br>
> "add_header" and the like.<br>
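><br>
> As one possible sketch (the header name and value here are just an<br>
> illustration; "add_header" is valid in "if in location" context):<br>
><br>
> if ($request_method = "OPTIONS") {<br>
> add_header Allow "GET, POST, OPTIONS" always;<br>
> return 204;<br>
> }<br>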
><br>
> Good luck with it,<br>
><br>
> f<br>
> --<br>
> Francis Daly <a href="mailto:francis@daoine.org" target="_blank">francis@daoine.org</a><br>
><br>
><br>
> ------------------------------<br>
><br>
> Subject: Digest Footer<br>
><br>
> _______________________________________________<br>
> nginx mailing list<br>
> <a href="mailto:nginx@nginx.org" target="_blank">nginx@nginx.org</a><br>
> <a href="http://mailman.nginx.org/mailman/listinfo/nginx" rel="noreferrer" target="_blank">http://mailman.nginx.org/mailman/listinfo/nginx</a><br>
><br>
> ------------------------------<br>
><br>
> End of nginx Digest, Vol 139, Issue 28<br>
> **************************************<br>
><br>
<br>
</blockquote></div>