nginx Digest, Vol 139, Issue 29
Amila Gunathilaka
amila.kdam at gmail.com
Sat May 29 14:16:40 UTC 2021
Dear Francis,
This is a follow-up email; I just want to add something to my previous one.
My concern is why nginx still gives 401 responses when my nginx.conf
has a basic authentication user name and password file in the
location /etc/nginx/.htpasswd.
It still does not authenticate my external client's POST requests. Any
thoughts?
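For context, a 401 from nginx just means the client did not send (valid) credentials; the file referenced by auth_basic_user_file can be generated and exercised roughly like this (the user name "pushgateway" and password "secret" are made-up placeholders, not values from the config):

```shell
# Produce an htpasswd-style line for /etc/nginx/.htpasswd
# ("pushgateway" / "secret" are hypothetical placeholder credentials)
openssl passwd -apr1 secret | sed 's/^/pushgateway:/'

# A client must then send the credentials, e.g.:
#   curl -u pushgateway:secret http://172.25.234.105/metrics
# Requests without an Authorization header will keep receiving 401.
```

A POST that still gets 401 usually means the posting client was never configured to send basic-auth credentials at all.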
Below is the screenshot of the /var/log/nginx/access.log file output.
[image: image.png]
Thank you
Amila
On Sat, May 29, 2021 at 7:12 PM <nginx-request at nginx.org> wrote:
> Send nginx mailing list submissions to
> nginx at nginx.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://mailman.nginx.org/mailman/listinfo/nginx
> or, via email, send a message with subject or body 'help' to
> nginx-request at nginx.org
>
> You can reach the person managing the list at
> nginx-owner at nginx.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of nginx digest..."
>
>
> Today's Topics:
>
> 1. Re: Help: Using Nginx Reverse Proxy bypass traffic in to a
> application running in a container (Amila Gunathilaka)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sat, 29 May 2021 19:11:38 +0530
> From: Amila Gunathilaka <amila.kdam at gmail.com>
> To: nginx at nginx.org, francis at daoine.org
> Subject: Re: Help: Using Nginx Reverse Proxy bypass traffic in to a
> application running in a container
> Message-ID:
> <
> CALqQtdy2RvkhHjAYQkZU-EhzOZG+9fnG-GXE9wuWzM-SQTbNjg at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Hi Francis,
>
> Thanks for your reply; here are my findings, progress, and the current
> situation with my issue. First of all I will answer your questions, so
> that you will have a better idea of how to guide me and whether I'm on
> the correct path.
>
> >As I understand it, the load balancer is making the request "OPTIONS /"
> >to nginx, and nginx is responding with a http 405, and you don't want
> >nginx to do that.
>
> >What response do you want nginx to give to the request?
>
> Yes, you are absolutely right. I wanted nginx to stop that 405 response and
> give the success response 200, or even a 401, from which I can confirm my
> proxy_pass and basic auth are working.
>
> Also, I think that 405 response is coming from nginx itself to the
> external load balancer, because the external load balancer communicates
> directly with nginx (port 80), and my upstream server (the port 9091
> server) is not a webapp; it's just a binary running inside a Docker
> container. But then again, maybe it's coming from the 9091 server app. You
> could investigate the 9091 server app here if you are interested -
> https://hub.docker.com/r/prom/pushgateway/tags?page=1&ordering=last_updated
> Let me know as well whether it's the 9091 app or nginx causing problems,
> if you can find out.
>
>
> Anyway, I decided to fix the OPTIONS method issue on the external load
> balancer itself: I logged in to my external load balancer's config page
> and changed the HTTP health checks from the OPTIONS method to GET.
> And yes, the 405 error is now gone. But now I'm getting 401 responses,
> which should be the correct response since I'm using basic auth in my
> nginx.conf file. Below is my nginx.conf, FYI:
>
> worker_rlimit_nofile 30000;
> events {
>     worker_connections 30000;
> }
>
> http {
>
>     #upstream pushgateway_upstreams {
>     #    server 127.0.0.1:9091;
>     #}
>
>     server {
>         listen 172.25.234.105:80;
>         #server_name 172.25.234.105;
>
>         location /metrics {
>             proxy_pass http://127.0.0.1:9091/metrics;
>             auth_basic "PROMETHEUS PUSHGATEWAY Login Area";
>             auth_basic_user_file /etc/nginx/.htpasswd;
>         }
>     }
> }
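As an aside, if the balancer's health check should not need credentials at all, a separate unauthenticated location can be carved out. A minimal sketch (the /healthz path is an assumption for illustration, not part of the config above):

```nginx
# Sketch: health-check endpoint that bypasses basic auth, so the external
# load balancer can expect a plain 200 ("/healthz" is a hypothetical path)
location = /healthz {
    auth_basic off;
    return 200;
}
```

The balancer would then probe /healthz instead of an authenticated path.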
>
>
> So I can confirm that proxy_pass is working, because when I browse my
> application it now returns the 401 message:
>
>
> curl -v http://172.25.234.105:80/metrics
> * Trying 172.25.234.105:80...
> * TCP_NODELAY set
> * Connected to 172.25.234.105 (172.25.234.105) port 80 (#0)
> > GET /metrics HTTP/1.1
> > Host: 172.25.234.105
> > User-Agent: curl/7.68.0
> > Accept: */*
> >
> * Mark bundle as not supporting multiuse
> < HTTP/1.1 401 Unauthorized
> < Server: nginx/1.18.0 (Ubuntu)
> < Date: Sat, 29 May 2021 13:29:00 GMT
> < Content-Type: text/html
> < Content-Length: 188
> < Connection: keep-alive
> < WWW-Authenticate: Basic realm="PROMETHEUS PUSHGATEWAY Login Area"
> <
> <html>
> <head><title>401 Authorization Required</title></head>
> <body>
> <center><h1>401 Authorization Required</h1></center>
> <hr><center>nginx/1.18.0 (Ubuntu)</center>
> </body>
> </html>
>
>
> Everything seems fine for now. Any questions or enhancements are
> welcome. Thanks, Francis.
>
>
> Thanks
>
> Amila
> DevOps Engineer
>
>
>
>
>
>
> On Fri, May 28, 2021 at 1:30 PM <nginx-request at nginx.org> wrote:
>
> >
> >
> > Today's Topics:
> >
> > 1. Request for comments on Nginx configuration (mandela)
> > 2. How to do a large buffer size > 64k uWSGI requests with Nginx
> > proxy | uwsgi request is too big with nginx (Rai Mohammed)
> > 3. Unit 1.24.0 release (Valentin V. Bartenev)
> > 4. Re: How to do a large buffer size > 64k uWSGI requests with
> > Nginx proxy | uwsgi request is too big with nginx (Maxim Dounin)
> > 5. Re: How to do a large buffer size > 64k uWSGI requests with
> > Nginx proxy | uwsgi request is too big with nginx (Rai Mohammed)
> > 6. Re: Help: Using Nginx Reverse Proxy bypass traffic in to a
> > application running in a container (Francis Daly)
> >
> >
> > ----------------------------------------------------------------------
> >
> > Message: 1
> > Date: Thu, 27 May 2021 14:55:03 -0400
> > From: "mandela" <nginx-forum at forum.nginx.org>
> > To: nginx at nginx.org
> > Subject: Request for comments on Nginx configuration
> > Message-ID:
> > <
> > 978b154f0f2638185fd19dbcca86a663.NginxMailingListEnglish at forum.nginx.org
> >
> >
> > Content-Type: text/plain; charset=UTF-8
> >
> > Hello all. I would like to have comments from the Nginx community on the
> > following configuration:
> >
> > worker_processes auto;
> > error_log /var/www/log/nginx.log;
> >
> > events {
> > multi_accept on;
> > worker_connections 16384;
> > }
> >
> > http {
> > include nginx.deny;
> > include mime.types;
> > default_type application/octet-stream;
> > aio on;
> > sendfile on;
> > tcp_nopush on;
> > gzip on;
> > gzip_comp_level 6;
> > gzip_min_length 1024;
> > gzip_types
> > application/javascript
> > application/json
> > application/xml
> > image/svg+xml
> > image/x-icon
> > text/plain
> > text/css
> > text/xml
> > ;
> > lua_shared_dict dict 16k;
> > log_format main $time_iso8601
> > ' srs="$status"'
> > ' srt="$request_time"'
> > ' crl="$request"'
> > ' crh="$host"'
> > ' cad="$remote_addr"'
> > ' ssp="$server_port"'
> > ' scs="$upstream_cache_status"'
> > ' sua="$upstream_addr"'
> > ' suc="$upstream_connect_time"'
> > ' sut="$upstream_response_time"'
> > ' sgz="$gzip_ratio"'
> > ' sbs="$body_bytes_sent"'
> > ' cau="$remote_user"'
> > ' ccr="$connection_requests"'
> > ' ccp="$pipe"'
> > ' crs="$scheme"'
> > ' crm="$request_method"'
> > ' cru="$request_uri"'
> > ' crp="$server_protocol"'
> > ' chh="$http_host"'
> > ' cha="$http_user_agent"'
> > ' chr="$http_referer"'
> > ' chf="$http_x_forwarded_for"'
> > ;
> > server_tokens off;
> > reset_timedout_connection on;
> > access_log /var/www/log/access.log main;
> >
> > fastcgi_cache main;
> > fastcgi_cache_key $host:$server_port$uri;
> > fastcgi_cache_methods GET HEAD;
> > fastcgi_ignore_headers Cache-Control Expires;
> > fastcgi_cache_path /tmp/nginx
> > levels=2:2
> > keys_zone=main:4m
> > inactive=24h
> > ;
> > ssl_certificate /etc/ssl/server.pem;
> > ssl_certificate_key /etc/ssl/server.key;
> > ssl_prefer_server_ciphers on;
> > ssl_session_cache shared:SSL:4m;
> > ssl_session_timeout 15m;
> >
> > upstream upstream {
> > server unix:/tmp/php-fpm.sock;
> > server 127.0.0.1:9000;
> > server [::1]:9000;
> > }
> >
> > map $http_origin $_origin {
> > default *;
> > '' '';
> > }
> >
> > server {
> > listen 80;
> > return 301 https://$host$request_uri;
> > }
> >
> > server {
> > listen 443 ssl http2;
> > include nginx.filter;
> > location / {
> > set $_v1 '';
> > set $_v2 '';
> > set $_v3 '';
> > rewrite_by_lua_block {
> >     local dict = ngx.shared.dict
> >     local host = ngx.var.host
> >     local data = dict:get(host)
> >     if data == nil then
> >         local labels = {}
> >         for s in host:gmatch('[^.]+') do
> >             table.insert(labels, 1, s)
> >         end
> >         data = labels[1] or ''
> >         local index = 2
> >         while index <= #labels and #data < 7 do
> >             data = data .. '/' .. labels[index]
> >             index = index + 1
> >         end
> >         local f = '/usr/home/www/src/' .. data .. '/app.php'
> >         local _, _, code = os.rename(f, f)
> >         if code == 2 then
> >             return ngx.exit(404)
> >         end
> >         if labels[index] == 'cdn' then
> >             data = data .. '|/tmp/www/cdn/' .. data
> >         else
> >             data = data .. '|/var/www/pub/' .. table.concat(labels, '/') .. '/-'
> >         end
> >         data = data .. '|' .. f
> >         dict:add(host, data)
> >         ngx.log(ngx.ERR, 'dict:add(' .. host .. ',' .. data .. ')')
> >     end
> >     local i = 1
> >     for s in data:gmatch('[^|]+') do
> >         ngx.var["_v" .. i] = s
> >         i = i + 1
> >     end
> > }
> > alias /;
> > try_files
> >     $_v2$uri
> >     /var/www/pub/$_v1/!$uri
> >     /var/www/pub/!$uri
> >     @;
> > add_header Access-Control-Allow-Origin $_origin;
> > expires 28d;
> > }
> > location dir: {
> > alias /;
> > index :none;
> > autoindex on;
> > }
> > location file: {
> > alias /;
> > }
> > location @ {
> > fastcgi_param DOCUMENT_ROOT $_v2;
> > fastcgi_param SCRIPT_FILENAME $_v3;
> > fastcgi_param SCRIPT_NAME $fastcgi_script_name;
> > fastcgi_param SERVER_PROTOCOL $server_protocol;
> > fastcgi_param SERVER_ADDR $server_addr;
> > fastcgi_param SERVER_PORT $server_port;
> > fastcgi_param SERVER_NAME $host;
> > fastcgi_param REMOTE_ADDR $remote_addr;
> > fastcgi_param REMOTE_PORT $remote_port;
> > fastcgi_param REQUEST_SCHEME $scheme;
> > fastcgi_param REQUEST_METHOD $request_method;
> > fastcgi_param REQUEST_URI $request_uri;
> > fastcgi_param QUERY_STRING $query_string;
> > fastcgi_param CONTENT_TYPE $content_type;
> > fastcgi_param CONTENT_LENGTH $content_length;
> > fastcgi_pass upstream;
> > }
> > }
> > }
> >
> > Posted at Nginx Forum:
> > https://forum.nginx.org/read.php?2,291674,291674#msg-291674
> >
> >
> >
> > ------------------------------
> >
> > Message: 2
> > Date: Thu, 27 May 2021 14:55:24 -0400
> > From: "Rai Mohammed" <nginx-forum at forum.nginx.org>
> > To: nginx at nginx.org
> > Subject: How to do a large buffer size > 64k uWSGI requests with Nginx
> > proxy | uwsgi request is too big with nginx
> > Message-ID:
> > <
> > c303563b562a6243138081c03cc4c7ff.NginxMailingListEnglish at forum.nginx.org
> >
> >
> > Content-Type: text/plain; charset=UTF-8
> >
> > How to do a large buffer size > 64k uWSGI requests with Nginx proxy
> >
> > Deployment stack :
> > Odoo ERP 12
> > Python 3.7.10 and Werkzeug 0.16.1 as backend
> > Nginx proxy : 1.20.0
> > uWSGI : 2.0.19.1
> > OS : FreeBSD 13.0-RELEASE
> >
> > Nginx throws an alert from uwsgi that the request is too big.
> > Alert: uwsgi request is too big: 81492, client: 10.29.79.250, server:
> > odoo12ce-erp, request: "GET /web/webclient/..........."
> >
> > As you can see, I increased the "uwsgi_buffer_size" in both uwsgi.ini
> > and nginx.conf.
> >
> > Nginx config :
> > `{
> >
> > # increase the size of the buffers to handle odoo data
> > # Activate uwsgi_buffering
> > uwsgi_buffering on;
> > uwsgi_buffers 16 128k;
> > uwsgi_buffer_size 128k;
> > uwsgi_busy_buffers_size 256k;
> > # uwsgi_max_temp_file_size with zero value disables buffering of
> > responses to temporary files
> > uwsgi_max_temp_file_size 0;
> > uwsgi_temp_file_write_size 256k;
> >
> > uwsgi_read_timeout 900s;
> > uwsgi_connect_timeout 900s;
> > uwsgi_send_timeout 900s;
> >
> > }`
> >
> > uwsgi.ini config :
> >
> > `
> >
> > [uwsgi]
> > strict = true
> > pcre-jit = true
> > #limit-as = 1024
> > #never-swap = true
> >
> > pidfile = /var/run/odoo_erp/odoo12ce_uwsgi.pid
> > # safe-pidfile = /var/run/odoo_erp/odoo12ce.pid
> >
> > # Enable REUSE_PORT flag on socket to allow multiple instances binding
> > # on the same address (BSD only).
> > reuse-port = true
> >
> > # Testing with www or odoo12ce
> > uid = odoo12ce
> > gid = odoo12ce
> >
> > # To test and verification
> > callable = application
> > # To test and verification
> > #module = odoo.service.wsgi_server:application
> >
> > # enable uwsgi master process
> > master = true
> > lazy = true
> > lazy-apps=true
> >
> > # turn on memory usage report
> > #memory-report=true
> >
> > enable-threads = true
> > threads = 2
> > thunder-lock = true
> > so-keepalive = true
> >
> > buffer-size = 262144
> > http-buffer-size = 262144
> >
> > response-headers-limit = 262144
> > http-headers-timeout = 900
> > # set max connections to 1024 in uWSGI
> > listen = 1024
> >
> > so-send-timeout = 900
> > socket-send-timeout = 900
> > so-write-timeout = 900
> > socket-write-timeout = 900
> >
> > http-timeout = 900
> > socket-timeout = 900
> >
> > wsgi-accept-buffer = true
> > wsgi-accept-buffers = true
> > # clear environment on exit and Delete sockets during shutdown
> > vacuum = true
> > single-interpreter = true
> >
> > # Shutdown when receiving SIGTERM (default is respawn)
> > die-on-term = true
> > need-app = true
> >
> > # Disable built-in logging
> > disable-logging = false
> >
> > # but log 4xx's and 5xx's anyway
> > log-4xx = true
> > log-5xx = true
> >
> > # full path to Odoo12ce project's root directory
> > chdir = /odoo_erp/odoo12ce/odoo12ce_server
> > #chdir2 = /odoo_erp/odoo12ce/odoo12ce_server
> >
> > pythonpath = /odoo_erp/odoo12ce/odoo12ce_server
> >
> > # odoo12ce's wsgi file
> > wsgi-file = /odoo_erp/odoo12ce/odoo12ce_server/setup/odoo12ce-uwsgi.py
> >
> > #emperor = /odoo_erp/odoo12ce/vassals
> >
> > uwsgi-socket = 127.0.0.1:8070
> > uwsgi-socket = 127.0.0.1:8170
> >
> > # daemonize uwsgi and write messages into given log
> > daemonize = /var/log/odoo_erp/odoo12ce/odoo12ce_uwsgi_emperor.log
> >
> > # Restart workers after this many requests
> > max-requests = 2000
> >
> > # Restart workers after this many seconds
> > max-worker-lifetime = 3600
> >
> > # Restart workers after this much resident memory
> > reload-on-rss = 2048
> >
> > # How long to wait before forcefully killing workers
> > worker-reload-mercy = 90
> >
> > # Maximum number of workers allowed (cpu * 2)
> > processes = 8
> >
> > `
> >
> > Posted at Nginx Forum:
> > https://forum.nginx.org/read.php?2,291675,291675#msg-291675
> >
> >
> >
> > ------------------------------
> >
> > Message: 3
> > Date: Thu, 27 May 2021 22:26:29 +0300
> > From: "Valentin V. Bartenev" <vbart at nginx.com>
> > To: nginx at nginx.org
> > Subject: Unit 1.24.0 release
> > Message-ID: <4650105.31r3eYUQgx at vbart-laptop>
> > Content-Type: text/plain; charset="UTF-8"
> >
> > Hi,
> >
> > I'm glad to announce a new release of NGINX Unit.
> >
> > This one is full of shiny new features. But before I dive into the
> > details, let me introduce our new developers without whom this release
> > wouldn't be so feature-rich. Please welcome Zhidao Hong (???) and
> > Oisín Canty.
> >
> > Zhidao has already been contributing to various nginx open-source
> > projects for years as a community member, and I'm very excited to
> > finally have him on board.
> >
> > Oisín is a university student who's very interested in Unit; he joined
> > our dev team as an intern and has already shown solid coding skills,
> > curiosity, and attention to detail, which is so important to our
> > project. Good job!
> >
> >
> > Now, back to the features. I'd like to highlight the first of our
> > improvements
> > in serving static media assets.
> >
> > :: MIME Type Filtering ::
> >
> > Now, you can restrict file serving by MIME type:
> >
> > {
> > "share": "/www/data",
> > "types": [ "image/*", "video/*" ]
> > }
> >
> > The configuration above allows serving only files with various video and
> > image extensions, while requests for all other file types return status
> > code 403.
> >
> > In particular, this goes well with the "fallback" option, which performs
> > another action if the "share" returns a 40x error:
> >
> > {
> > "share": "/www/data",
> > "types": [ "!application/x-httpd-php" ],
> >
> > "fallback": {
> > "pass": "applications/php"
> > }
> > }
> >
> > Here, all requests to existing files other than ".php" will be served as
> > static
> > content while the rest will be passed to a PHP application.
> >
> > More examples and documentation snippets are available here:
> >
> > - https://unit.nginx.org/configuration/#mime-filtering
> >
> >
> > :: Chrooting and Path Restrictions When Serving Files ::
> >
> > As we take security seriously, now Unit introduces the ability to chroot
> > not only its application processes but also the static files it serves on
> > a per-request basis. Additionally, you can restrict traversal of mounting
> > points and symbolic link resolution:
> >
> > {
> > "share": "/www/data/static/",
> > "chroot": "/www/data/",
> > "follow_symlinks": false,
> > "traverse_mounts": false
> > }
> >
> > See here for more information:
> >
> > - https://unit.nginx.org/configuration/#path-restrictions
> >
> > For details of Unit application process isolation abilities:
> >
> > - https://unit.nginx.org/configuration/#process-isolation
> >
> >
> > Other notable features unrelated to static file serving:
> >
> > * Multiple WSGI/ASGI Python entry points per process
> >
> > It allows loading multiple modules or app entry points into a single
> > Python
> > process, choosing between them when handling requests with the full
> > power of
> > Unit's routes system.
> >
> > See here for Python's "targets" object description:
> >
> > - https://unit.nginx.org/configuration/#configuration-python-targets
> >
> > And here, more info about Unit's internal routing:
> >
> > - https://unit.nginx.org/configuration/#routes
> >
> >
> > * Automatic overloading of "http" and "websocket" modules in Node.js
> >
> > Now you can run Node.js apps on Unit without touching their sources:
> >
> > - https://unit.nginx.org/configuration/#node-js
> >
> >
> > * Applying OpenSSL configuration commands
> >
> > Finally, you can control various TLS settings via OpenSSL's generic
> > configuration interface with all the dynamic power of Unit:
> >
> > - https://unit.nginx.org/configuration/#ssl-tls-configuration
> >
> >
> > The full changelog for the release:
> >
> > Changes with Unit 1.24.0                                     27 May 2021
> >
> > *) Change: PHP added to the default MIME type list.
> >
> > *) Feature: arbitrary configuration of TLS connections via OpenSSL
> > commands.
> >
> > *) Feature: the ability to limit static file serving by MIME types.
> >
> > *) Feature: support for chrooting, rejecting symlinks, and rejecting
> > mount point traversal on a per-request basis when serving static
> > files.
> >
> > *) Feature: a loader for automatically overriding the "http" and
> > "websocket" modules in Node.js.
> >
> > *) Feature: multiple "targets" in Python applications.
> >
> > *) Feature: compatibility with Ruby 3.0.
> >
> > *) Bugfix: the router process could crash while closing a TLS
> > connection.
> >
> > *) Bugfix: a segmentation fault might have occurred in the PHP module
> > if fastcgi_finish_request() was used with the "auto_globals_jit"
> > option enabled.
> >
> >
> > That's all for today, but even more exciting features are poised for the
> > upcoming releases:
> >
> > - statistics API
> > - process control API
> > - variables from regexp captures in the "match" object
> > - simple request rewrites using variables
> > - variables support in static file serving options
> > - ability to override client IP from the X-Forwarded-For header
> > - TLS sessions cache and tickets
> >
> > Also, please check our GitHub to follow the development and discuss new
> > features:
> >
> > - https://github.com/nginx/unit
> >
> > Stay tuned!
> >
> > wbr, Valentin V. Bartenev
> >
> >
> >
> >
> >
> > ------------------------------
> >
> > Message: 4
> > Date: Fri, 28 May 2021 02:53:53 +0300
> > From: Maxim Dounin <mdounin at mdounin.ru>
> > To: nginx at nginx.org
> > Subject: Re: How to do a large buffer size > 64k uWSGI requests with
> > Nginx proxy | uwsgi request is too big with nginx
> > Message-ID: <YLAxEUEUyrvdUlQr at mdounin.ru>
> > Content-Type: text/plain; charset=us-ascii
> >
> > Hello!
> >
> > On Thu, May 27, 2021 at 02:55:24PM -0400, Rai Mohammed wrote:
> >
> > > How to do a large buffer size > 64k uWSGI requests with Nginx proxy
> > >
> > > Deployment stack :
> > > Odoo ERP 12
> > > Python 3.7.10 and Werkzeug 0.16.1 as backend
> > > Nginx proxy : 1.20.0
> > > uWSGI : 2.0.19.1
> > > OS : FreeBSD 13.0-RELEASE
> > >
> > > Nginx throw an alert from uwsgi of request is too big
> > > Alert : uwsgi request is too big: 81492, client: 10.29.79.250, server:
> > > odoo12ce-erp, request: "GET /web/webclient/..........."
> > >
> > > As you can see I increased the "uwsgi_buffer_size " in both uwsgi.ini
> and
> > > nginx.conf.
> >
> > The uwsgi protocol uses a 16-bit datasize field[1], and this limits the
> > maximum size of all headers in a request to uwsgi backends. The
> > error message from nginx suggests you are hitting this limit.
> > Unfortunately, using larger buffers won't help here.
> >
> > In most cases such huge request headers indicate that there is a
> > bug somewhere. For example, nginx by default limits the total size of
> > request headers to 32k (see [2]). A similar 64k limit also exists
> > in FastCGI (though with the protocol clearly defining how to provide
> > additional data if needed, just not implemented in nginx), and the
> > only case when it was questioned was due to a miscoded client (see
> > [3]).
> >
> > If such huge request headers are nevertheless intentional, the
> > simplest solution probably would be to switch to a different
> > protocol, such as HTTP.
> >
> > [1] https://uwsgi-docs.readthedocs.io/en/latest/Protocol.html
> > [2] http://nginx.org/r/large_client_header_buffers
> > [3] https://trac.nginx.org/nginx/ticket/239
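Switching the nginx-to-backend hop from the uwsgi protocol to plain HTTP, as suggested above, might look roughly like this. This is only a sketch: it assumes the backend is reconfigured to speak HTTP on 127.0.0.1:8070 (e.g. via uWSGI's http-socket instead of uwsgi-socket), and the location path is illustrative:

```nginx
# Sketch: proxy over HTTP instead of uwsgi_pass; HTTP imposes no 16-bit
# cap on the serialized request-header block (backend address assumed)
location / {
    proxy_pass http://127.0.0.1:8070;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```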
> >
> > --
> > Maxim Dounin
> > http://mdounin.ru/
> >
> >
> > ------------------------------
> >
> > Message: 5
> > Date: Thu, 27 May 2021 21:59:13 -0400
> > From: "Rai Mohammed" <nginx-forum at forum.nginx.org>
> > To: nginx at nginx.org
> > Subject: Re: How to do a large buffer size > 64k uWSGI requests with
> > Nginx proxy | uwsgi request is too big with nginx
> > Message-ID:
> > <
> > 06b5de642b1389c66bdbe8e193824603.NginxMailingListEnglish at forum.nginx.org
> >
> >
> > Content-Type: text/plain; charset=UTF-8
> >
> > Hello,
> > Yes, I have tracked down the request that generates this big header, and
> > it's a GET URI pulling all the features installed and requested by the
> > user of the ERP.
> >
> > Before I integrated the uWSGI layer, the stack deployment with Nginx KTLS
> > and HTTP/2 worked perfectly, and there was no problem with buffer sizing.
> > The reason I added the uWSGI layer is to use the uwsgi-socket binary
> > protocol; it works fine and is very fast, decreasing the load time of the
> > main web page.
> > So for now I have to switch to the http-socket protocol and configure
> > HTTP/2 in uWSGI.
> > I hope in the future Nginx will allow using such huge header sizes.
> >
> > Thanks for your reply and clarifications.
> >
> > Posted at Nginx Forum:
> > https://forum.nginx.org/read.php?2,291675,291680#msg-291680
> >
> >
> >
> > ------------------------------
> >
> > Message: 6
> > Date: Fri, 28 May 2021 09:00:00 +0100
> > From: Francis Daly <francis at daoine.org>
> > To: nginx at nginx.org
> > Subject: Re: Help: Using Nginx Reverse Proxy bypass traffic in to a
> > application running in a container
> > Message-ID: <20210528080000.GC11167 at daoine.org>
> > Content-Type: text/plain; charset=us-ascii
> >
> > On Tue, May 25, 2021 at 09:47:47PM +0530, Amila Gunathilaka wrote:
> >
> > Hi there,
> >
> > > I'm sorry for taking time to reply to this, you were so keen about my
> > > problem. Thank you.
> >
> > No worries at all -- the mailing list is not an immediate-response medium.
> >
> > > Actually my problem was when sending *response* to the load balancer
> > > from the nginx (not the request, it should be corrected as the
> > > *response* in my previous email).
> > > Such as my external load balancer is always doing a health check for my
> > > nginx port (80), below is the *response* message in the
> > > /var/log/nginx/access.log against the health check request coming from
> > > the external-loadbalancer.
> >
> > As I understand it, the load balancer is making the request "OPTIONS /"
> > to nginx, and nginx is responding with a http 405, and you don't want
> > nginx to do that.
> >
> > What response do you want nginx to give to the request?
> >
> > Your config makes it look like nginx is told to proxy_pass the OPTIONS
> > request to your port 9091 server, so I presume that your port 9091 server
> > is responding 405 to the OPTIONS request and nginx is passing the response
> > from the 9091-upstream to the load-balancer client.
> >
> > Your port 9091 logs or traffic analysis should show that that is the case.
> >
> > If that is the case, you *could* fix it by telling your 9091-upstream to
> > respond how you want it to the "OPTIONS /" request (using its config); or
> > you could configure nginx to intercept the request and handle it itself,
> > without proxy_pass'ing it.
> >
> > The first case would mean that the "health check" is actually testing
> > the full nginx-to-upstream chain; the second would have it only testing
> > that nginx is responding.
> >
> > If you decide that you want nginx to handle this request itself, and to
> > respond with a http 204, you could add something like
> >
> > if ($request_method = "OPTIONS") { return 204; }
> >
> > inside the "location /" block.
> >
> > (Strictly: that would tell nginx to handle all "OPTIONS /anything"
> > requests, not just "OPTIONS /".)
> >
> > You would not need the error_page directives that you show.
> >
> >
> > You could instead add a new "location = /" block, and do the OPTIONS
> > check there; but you would probably also have to duplicate the three
> > other lines from the "location /" block -- sometimes people prefer
> > "tidy-looking" configuration over "correctness and probable machine
> > efficiency". Pick which you like; if you do not measure a difference,
> > there is not a difference that you care about.
> >
> > That is, you want either one location:
> >
> > > server {
> > > listen 80;
> > > server_name 172.25.234.105;
> >
> > > location / {
> >
> > if ($request_method = "OPTIONS") { return 204; }
> >
> > > proxy_pass http://127.0.0.1:9091;
> > > auth_basic "PROMETHEUS PUSHGATEWAY Login Area";
> > > auth_basic_user_file /etc/nginx/.htpasswd;
> > > }
> > > }
> >
> > or two locations:
> >
> > location = / {
> >     if ($request_method = "OPTIONS") { return 204; }
> >     proxy_pass http://127.0.0.1:9091;
> >     auth_basic "PROMETHEUS PUSHGATEWAY Login Area";
> >     auth_basic_user_file /etc/nginx/.htpasswd;
> > }
> >
> > location / {
> >     proxy_pass http://127.0.0.1:9091;
> >     auth_basic "PROMETHEUS PUSHGATEWAY Login Area";
> >     auth_basic_user_file /etc/nginx/.htpasswd;
> > }
> >
> > (and, if you use the two, you could potentially move the "auth_basic"
> > and "auth_basic_user_file" outside the "location", to be directly within
> > "server"; that does depend on what else is in your config file.)
> >
> > If you want something else in the response to the OPTIONS request,
> > you can change the "return" response code, or "add_header" and the like.
> >
> > Good luck with it,
> >
> > f
> > --
> > Francis Daly francis at daoine.org
> >
> >
> > ------------------------------
> >
> > Subject: Digest Footer
> >
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
> >
> > ------------------------------
> >
> > End of nginx Digest, Vol 139, Issue 28
> > **************************************
> >
>
> ------------------------------
>
>
> ------------------------------
>
> End of nginx Digest, Vol 139, Issue 29
> **************************************
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 138237 bytes
Desc: not available
URL: <http://mailman.nginx.org/pipermail/nginx/attachments/20210529/c7796d1d/attachment-0001.png>