From osa at freebsd.org.ru  Tue Jan  7 00:14:36 2025
From: osa at freebsd.org.ru (Sergey A. Osokin)
Date: Tue, 7 Jan 2025 03:14:36 +0300
Subject: nginx and python script
In-Reply-To:
References:
Message-ID:

Hi,

On Fri, Dec 27, 2024 at 10:15:15PM +0100, Ralf Figge via nginx wrote:

[...]

> > I am a newbie to nginx. I need to run a Python script via cgi-bin.

Running cgi-bin scripts isn't nginx's job, but nginx can proxy the
request to an application server behind it.

For the application server role you may want to take a look at NGINX
Unit, https://unit.nginx.org/, a universal "polyglot" dynamic
application server that can natively run Python,
https://unit.nginx.org/configuration/#python, and other applications.

Thank you.

-- 
Sergey A. Osokin
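For readers who need classic CGI semantics rather than an application
server, a common companion to nginx is fcgiwrap, a small FastCGI-to-CGI
shim; this is a different approach from the NGINX Unit route suggested
above. A minimal sketch, assuming Debian-style paths for the fcgiwrap
socket and the CGI directory (adjust both for your system):

    # Hand /cgi-bin/ requests to fcgiwrap over FastCGI; fcgiwrap then
    # executes the script (e.g. a Python file with a shebang) as CGI.
    location /cgi-bin/ {
        gzip off;
        include fastcgi_params;
        fastcgi_pass unix:/run/fcgiwrap.socket;
        # Map the request URI onto the script on disk.
        fastcgi_param SCRIPT_FILENAME /usr/lib/cgi-bin$fastcgi_script_name;
    }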
From xeioex at nginx.com  Tue Jan 14 22:42:40 2025
From: xeioex at nginx.com (Dmitry Volyntsev)
Date: Tue, 14 Jan 2025 14:42:40 -0800
Subject: njs-0.8.9
Message-ID: <431b8dce-757c-44c5-92c9-5ab62987ea86@nginx.com>

Hello,

I'm glad to announce a new release of the NGINX JavaScript module (njs).

This release introduces a file system module for the QuickJS engine.

Learn more about njs:

- Overview and introduction:
  https://nginx.org/en/docs/njs/
- NGINX JavaScript in Your Web Server Configuration:
  https://youtu.be/Jc_L6UffFOs
- Extending NGINX with Custom Code:
  https://youtu.be/0CVhq4AUU7M
- Using node modules with njs:
  https://nginx.org/en/docs/njs/node_modules.html
- Writing njs code using TypeScript definition files:
  https://nginx.org/en/docs/njs/typescript.html

Feel free to try it and give us feedback on:

- GitHub:
  https://github.com/nginx/njs/issues

Additional examples and howtos can be found here:

- GitHub:
  https://github.com/nginx/njs-examples

Changes with njs 0.8.9                                       14 Jan 2025

    nginx modules:

    *) Bugfix: removed extra VM creation per server.
       Previously, when js_import was declared in the http or stream
       block, an extra copy of the VM instance was created for each
       server block. This was not needed and consumed a lot of memory
       for configurations with many server blocks.
       This issue was introduced in 9b674412 (0.8.6) and was partially
       fixed for location blocks only in 685b64f0 (0.8.7).

    Core:

    *) Feature: added fs module for QuickJS engine.
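For context, the configuration shape affected by the VM bugfix above is
js_import declared once at the http level and reused across many server
blocks. A minimal sketch; http.js and its hello handler are
hypothetical names:

    http {
        js_import main from http.js;  # one import shared by all servers

        server {
            listen 8000;
            location / { js_content main.hello; }
        }

        server {
            listen 8001;
            # Before 0.8.9, each additional server block like this one
            # carried its own copy of the VM instance.
            location / { js_content main.hello; }
        }
    }

With many server blocks, the duplicated instances added up, which is
the memory growth the changelog entry describes.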
From rejaine at bhz.jamef.com.br  Fri Jan 24 13:47:56 2025
From: rejaine at bhz.jamef.com.br (Rejaine Da Silveira Monteiro)
Date: Fri, 24 Jan 2025 10:47:56 -0300
Subject: nginx with gitlab self host and cookie/session expired problems
Message-ID:

Hi,

I am using nginx as a reverse proxy for my self-hosted GitLab instance.
When accessing GitLab through this proxy, I frequently experience
logouts with a "session expired" error, even during active sessions.

The GitLab URL is configured as gitlab.mydomain.com. However, when
logging out of GitLab, all other company services using *.mydomain.com
are also disconnected, even sites that do not share the same
certificate (GitLab uses a wildcard certificate) or that have no
certificate at all.

After some research, I discovered that GitLab appears to delete all
cookies for the domain during logout. There even seems to be a fix for
this issue:
https://gitlab.com/gitlab-org/gitlab/-/merge_requests/142740

Now, I'm wondering if the frequent logouts (session expiration) might
be related to this cookie issue, and if there are any suggestions for
addressing it via nginx.

Thanks
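Separate from the logout/cookie-deletion bug above, a frequent cause of
premature "session expired" errors behind a reverse proxy is that the
Host and X-Forwarded-Proto headers are not passed through, so the Rails
stack behind GitLab rejects the session cookie or the CSRF token. A
minimal sketch of a proxy block that forwards them; the upstream
address is an assumption:

    server {
        listen 443 ssl;
        server_name gitlab.mydomain.com;

        location / {
            # 127.0.0.1:8080 is a placeholder for the GitLab upstream.
            proxy_pass http://127.0.0.1:8080;

            # Without these, Rails may consider the session cookie or
            # CSRF token invalid and expire the session early.
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }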
From furregtt at gmail.com  Fri Jan 24 16:14:44 2025
From: furregtt at gmail.com (Amv_Nuga)
Date: Fri, 24 Jan 2025 17:14:44 +0100
Subject: help with nginx
Message-ID:

I have a website: https://www.solidkingsinc.com

I am on Cloudflare's free tier and use its reverse proxy. My nginx
configuration works fine for normal traffic. But when I try to
brute-force/overload/DDoS my own server, it crashes under the flood of
GET requests. I send the requests from Python (though I could use
anything), and as soon as I run a fetch loop with 10,000 requests, or
not even 10,000, maybe 1,000, it never once returns a 429 or 403 or
anything like that. I have spent so many hours trying to find a way to
use rate limiting with nginx but failed miserably; I don't see any
effect. It still returns a 200 status code and my server overloads,
nothing stops it. Cloudflare used to stop it and return 403, but they
probably removed that from the free tier too.

Here is my current configuration; everything works as intended except
the rate limiting:

worker_processes 1;

events {
    worker_connections 4096;
}

http {
    limit_req_zone $binary_remote_addr zone=mylimit:10m rate=2r/s;

    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    log_format combined_with_limit '$remote_addr - $remote_user [$time_local] "$request" '
                                   '$status $body_bytes_sent "$http_referer" '
                                   '"$http_user_agent" "$http_x_forwarded_for" '
                                   'limit_status=$limit_req_status';

    access_log logs/access.log combined_with_limit;

    # backend
    server {
        listen 443 ssl;
        server_name api.solidkingsinc.com;

        ssl_certificate C:/Windows/System32/drivers/etc/apissl/certificate.pem;
        ssl_certificate_key C:/Windows/System32/drivers/etc/apissl/private.key;
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 5m;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        location / {
            limit_req zone=mylimit burst=10 delay=5;

            proxy_pass http://localhost:1337;
            proxy_http_version 1.1;
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            proxy_pass_request_headers on;
        }
    }

    # frontend
    server {
        listen 443 ssl;
        server_name solidkingsinc.com www.solidkingsinc.com;

        ssl_certificate C:/Windows/System32/drivers/etc/ssl/certificate.pem;
        ssl_certificate_key C:/Windows/System32/drivers/etc/ssl/private.key;
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 5m;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        if ($host = solidkingsinc.com) {
            return 301 https://www.solidkingsinc.com$request_uri;
        }

        location / {
            proxy_pass http://localhost:5174;
            proxy_http_version 1.1;
            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            proxy_pass_request_headers on;
        }
    }

    server {
        listen 80;
        server_name api.solidkingsinc.com solidkingsinc.com www.solidkingsinc.com;

        location / {
            return 301 https://$host$request_uri;
        }
    }

    server {
        listen 80 default_server;
        listen [::]:80 default_server;
        server_name _;

        return 444;
    }
}

From r at roze.lv  Sat Jan 25 12:36:50 2025
From: r at roze.lv (Reinis Rozitis)
Date: Sat, 25 Jan 2025 14:36:50 +0200
Subject: help with nginx
In-Reply-To:
References:
Message-ID: <002301db6f25$cee615a0$6cb240e0$@roze.lv>

> I have a website: https://www.solidkingsinc.com
> I have spent so many hours trying to find a way to use rate limiting
> with nginx but failed miserably.

According to the configuration you provided, you have enabled request
limiting only for api.solidkingsinc.com and not for the www website.

You need to add limit_req to that server/location {} block as well.

rr
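A sketch of the frontend server block with the existing zone applied.
One additional caveat for this setup: behind Cloudflare, nginx
otherwise sees Cloudflare's edge addresses, so a $binary_remote_addr
key can lump many visitors into one bucket; restoring the client IP
needs ngx_http_realip_module (included in official nginx packages, a
build-time option when compiling from source). The CIDR below is a
placeholder; populate the list from https://www.cloudflare.com/ips/.

    server {
        listen 443 ssl;
        server_name solidkingsinc.com www.solidkingsinc.com;

        # Restore the real client address from Cloudflare's header so
        # the limit key is the visitor, not the CF edge.
        set_real_ip_from 173.245.48.0/20;
        real_ip_header CF-Connecting-IP;

        location / {
            # Same zone as the api vhost; reject excess requests with
            # 429 instead of the default 503.
            limit_req zone=mylimit burst=10 nodelay;
            limit_req_status 429;

            proxy_pass http://localhost:5174;
        }
    }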
From r at roze.lv  Sat Jan 25 12:45:31 2025
From: r at roze.lv (Reinis Rozitis)
Date: Sat, 25 Jan 2025 14:45:31 +0200
Subject: nginx with gitlab self host and cookie/session expired problems
In-Reply-To:
References:
Message-ID: <002401db6f27$04d6b830$0e842890$@roze.lv>

> Now, I'm wondering if the frequent logouts (session expiration) might
> be related to this cookie issue, and if there are any suggestions for
> addressing it via nginx.

Nginx can't do much about it if the application behind it deletes all
the cookies.

So check what version of GitLab you are running, as the fix was merged
only about 5 months ago:
https://gitlab.com/gitlab-org/gitlab/-/merge_requests/156213

rr

From rejaine at bhz.jamef.com.br  Tue Jan 28 11:48:49 2025
From: rejaine at bhz.jamef.com.br (Rejaine Da Silveira Monteiro)
Date: Tue, 28 Jan 2025 08:48:49 -0300
Subject: nginx with gitlab self host and cookie/session expired problems
In-Reply-To: <002401db6f27$04d6b830$0e842890$@roze.lv>
References: <002401db6f27$04d6b830$0e842890$@roze.lv>
Message-ID:

"Nginx can't do much about it if the application behind it deletes all
the cookies."

Yes, I am fully aware that this issue needs to be resolved at the
application level, and we should apply the merge request as soon as
possible.

My final question about nginx concerns the session expiring when the
application is accessed through nginx. This problem doesn't occur when
accessing the application directly. Even when using GitLab in
isolation (without opening other sites on the same domain), the
session still expires after a very short time when accessed via nginx.

I've tried several timeout configurations in nginx, but none of them
have resolved the issue. Currently the configuration looks like this,
and it still hasn't worked:

    proxy_connect_timeout 600;
    proxy_send_timeout 600;
    proxy_read_timeout 600;
    send_timeout 600;
    client_header_timeout 600;
    client_body_timeout 600;

I'm still running additional tests to determine whether the cookie
issue is also contributing to the sessions expiring so frequently.

On Sat, Jan 25, 2025 at 9:45 AM Reinis Rozitis via nginx wrote:

> > Now, I'm wondering if the frequent logouts (session expiration)
> > might be related to this cookie issue, and if there are any
> > suggestions for addressing it via nginx.
>
> Nginx can't do much about it if the application behind it deletes
> all the cookies.
>
> So check what version of GitLab you are running, as the fix was
> merged only about 5 months ago:
> https://gitlab.com/gitlab-org/gitlab/-/merge_requests/156213
>
> rr
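Worth noting: the timeouts above govern how long nginx waits on idle
connections; they don't control application-level session lifetime, so
they wouldn't be expected to change GitLab's behavior. One way to
gather data from the nginx side is to log the Set-Cookie headers the
upstream sends, which shows directly when the session cookie is
rewritten, shortened, or cleared. A minimal sketch (the log path is an
assumption; depending on nginx version, multiple Set-Cookie headers
are either combined into one value or only the first is reported):

    # In the http context:
    log_format setcookie '$time_local "$request" $status '
                         'set_cookie="$upstream_http_set_cookie"';

    server {
        # ... existing gitlab.mydomain.com proxy configuration ...

        # Record what the upstream sets alongside each response.
        access_log /var/log/nginx/gitlab_cookies.log setcookie;
    }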
From clima.gabrielphoto at gmail.com  Thu Jan 30 09:01:32 2025
From: clima.gabrielphoto at gmail.com (Clima Gabriel)
Date: Thu, 30 Jan 2025 11:01:32 +0200
Subject: Nginx ignores proxy_no_cache
In-Reply-To:
References: <87sezxpfpm.wl-kirill@korins.ky>
Message-ID:

Hello Maxim,

Hope this helps. We encounter disk failures fairly often, and what the
kernel will do most of the time is re-mount the disk as read-only.
What I did was add a check that verifies the disk is healthy
beforehand and takes the proxy_no_cache path if it is not. It doesn't
cover all possible disk failures; sometimes you'll just get I/O errors
and still return 5xx to clients. But to cover all cases a much
cleverer reshuffling of the code is needed.

diff --git a/src/http/ngx_http_upstream.c b/src/http/ngx_http_upstream.c
index d7f427d50..839ed6c0d 100644
--- a/src/http/ngx_http_upstream.c
+++ b/src/http/ngx_http_upstream.c
@@ -8,6 +8,7 @@

 #include <ngx_config.h>
 #include <ngx_core.h>
 #include <ngx_http.h>
+#include <sys/statvfs.h>

@@ -3424,6 +3425,16 @@ ngx_http_upstream_send_response(ngx_http_request_t *r, ngx_http_upstream_t *u)
         break;

     default: /* NGX_OK */
+        if (r->cache) {
+            struct statvfs  fs;
+            if (statvfs((char *) r->cache->file_cache->path->name.data, &fs) == -1) {
+                return NGX_ERROR;
+            }
+            if ((fs.f_flag & ST_RDONLY) != 0) {
+                u->cacheable = 0;
+                break;
+            }
+        }

         if (u->cache_status == NGX_HTTP_CACHE_BYPASS) {

On Sun, Apr 7, 2024 at 4:56 PM Maxim Dounin wrote:

> Hello!
>
> On Sun, Apr 07, 2024 at 01:36:21PM +0200, Kirill A. Korinsky wrote:
>
> > Greetings,
> >
> > Let's assume that I would like the LB to take caching behavior
> > from the backend and force it to cache only responses that have an
> > X-No-Cache header with the value NO.
> >
> > Nginx should cache a response with any status code if it has such
> > a header.
> >
> > This works well until the backend is unavailable and nginx returns
> > a hardcoded 502 that doesn't have the control header, but such a
> > response is cached anyway.
> >
> > Here is the config that allows to reproduce the issue:
> >
> >     http {
> >         default_type application/octet-stream;
> >
> >         proxy_cache_path /tmp/nginx_cache keys_zone=the_zone:1m;
> >         proxy_cache the_zone;
> >         proxy_cache_valid any 15m;
> >         proxy_cache_methods GET HEAD POST;
> >
> >         add_header X-Cache-Status $upstream_cache_status always;
> >
> >         map $upstream_http_x_no_cache $no_cache {
> >             default 1;
> >             "NO"    0;
> >         }
> >
> >         proxy_no_cache $no_cache;
> >
> >         upstream echo {
> >             server 127.127.127.127:80;
> >         }
> >
> >         server {
> >             listen 1234;
> >             server_name localhost;
> >
> >             location / {
> >                 proxy_pass http://echo;
> >             }
> >         }
> >     }
> >
> > When I run:
> >
> >     curl -D - http://127.0.0.1:1234/
> >
> > it returns MISS on the first request and HIT on the second one.
> >
> > Here I expect both requests to return MISS.
>
> Thanks for the report.
>
> Indeed, proxy_no_cache is only checked for proper upstream
> responses, but not when caching errors, including internally
> generated 502/504 in ngx_http_upstream_finalize_request(), and
> intercepted errors in ngx_http_upstream_intercept_errors().
>
> Quick look suggests there will also be issues with caching errors
> after proxy_cache_bypass (errors won't be cached even if they
> should), as well as issues with proxy_cache_max_range_offset after
> proxy_cache_bypass (it will be ignored).
>
> This needs cleanup / fixes, added to my TODO list.
>
> --
> Maxim Dounin
> http://mdounin.ru/

From mdounin at mdounin.ru  Fri Jan 31 09:36:03 2025
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 31 Jan 2025 12:36:03 +0300
Subject: Nginx ignores proxy_no_cache
In-Reply-To:
References: <87sezxpfpm.wl-kirill@korins.ky>
Message-ID:

Hello!

On Thu, Jan 30, 2025 at 11:01:32AM +0200, Clima Gabriel wrote:

> Hello Maxim,
>
> Hope this helps. We encounter disk failures fairly often, and what
> the kernel will do most of the time is re-mount the disk as
> read-only. What I did was add a check that verifies the disk is
> healthy beforehand and takes the proxy_no_cache path if it is not.
> It doesn't cover all possible disk failures; sometimes you'll just
> get I/O errors and still return 5xx to clients. But to cover all
> cases a much cleverer reshuffling of the code is needed.

[...]

I cannot say I like this approach.

Rather, I would recommend adding an explicit test via the
proxy_no_cache configuration directive if that's an expected state
in your setup. Right now this certainly can be done with the
embedded Perl module, but we can consider extending "if" with "-r"
and "-w" tests to make things easier.

Alternatively, we can consider handling (at least some) cache
access failures gracefully and ensure that we'll fall back to
normal proxying if this happens. This wasn't done previously to
keep the code simple, but can be reconsidered if there is an
understanding that this is quite important in some setups and
there is a simple enough way to handle such failures.

Please also note that right now even proxying won't work if
a temporary file is needed and cannot be created, as file access
failures are considered fatal.

OTOH, my personal preference is to keep disks mirrored; this
ensures that a single disk failure won't affect server operations
and provides better performance as a bonus.

(Note well that the code in question was modified in freenginx to
address the issue reported by Kirill in the thread you are
replying to, and your patch won't apply, see
https://freenginx.org/hg/nginx/rev/c5623963c29e for details.)

[...]

-- 
Maxim Dounin
http://mdounin.ru/
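A sketch of the Perl-based test Maxim alludes to; it assumes nginx was
built with the embedded Perl module (--with-http_perl_module) and uses
the cache path from Kirill's config as a stand-in. Note that perl_set
handlers run per request, so the writability check is paid on every
request reaching the location:

    http {
        proxy_cache_path /tmp/nginx_cache keys_zone=the_zone:1m;

        # "1" (skip caching) when the cache directory is not writable
        # by the worker, "0" (cache as usual) otherwise.
        perl_set $cache_unwritable 'sub {
            return -w "/tmp/nginx_cache" ? "0" : "1";
        }';

        server {
            listen 1234;

            location / {
                proxy_cache the_zone;
                proxy_no_cache $cache_unwritable;
                # Placeholder upstream address.
                proxy_pass http://127.0.0.1:8080;
            }
        }
    }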
From clima.gabrielphoto at gmail.com  Fri Jan 31 10:54:32 2025
From: clima.gabrielphoto at gmail.com (Clima Gabriel)
Date: Fri, 31 Jan 2025 12:54:32 +0200
Subject: Nginx ignores proxy_no_cache
In-Reply-To:
References: <87sezxpfpm.wl-kirill@korins.ky>
Message-ID:

> my personal preference is to keep disks mirrored

Certainly, that is ideal. However, nginx is used by many businesses
(CDNs, content providers, etc.) whose margins are often too low to
justify hardware redundancy. And cheap SSDs can have an incredibly bad
impact on performance: we've encountered certain SSDs (the 870 QVO,
for example) that, at the extremes, took literally seconds to return
from open()/read()/write().

> can be reconsidered if there is an understanding that this is quite
> important in some setups and there is a simple enough way to handle
> such failures

We'll need to implement this regardless, so I'll, fingers crossed, be
back with updates.

Following are notes on my initial idea, which proved too problematic
to pursue further, so feel free to skip.

The idea: create the temp file ahead of time, before the
proxy_no_cache check, and take the proxy_no_cache code path if we fail
to create the file. Clearly not very elegant either, as:

- the temp file will need to be removed in the else branch of
  "if (p->cacheable) ...";
- the temp file may be created successfully, yet you may still hit an
  I/O error a few nanoseconds later when you try to write to it.

There were several variants of this (diff file attached) which either
leaked temp files or broke proxy_no_cache.

On Fri, Jan 31, 2025 at 11:36 AM Maxim Dounin wrote:

> Hello!
>
> On Thu, Jan 30, 2025 at 11:01:32AM +0200, Clima Gabriel wrote:
>
> [...]
>
> I cannot say I like this approach.
>
> Rather, I would recommend adding an explicit test via the
> proxy_no_cache configuration directive if that's an expected state
> in your setup. Right now this certainly can be done with the
> embedded Perl module, but we can consider extending "if" with "-r"
> and "-w" tests to make things easier.
>
> Alternatively, we can consider handling (at least some) cache
> access failures gracefully and ensure that we'll fall back to
> normal proxying if this happens. This wasn't done previously to
> keep the code simple, but can be reconsidered if there is an
> understanding that this is quite important in some setups and
> there is a simple enough way to handle such failures.
>
> Please also note that right now even proxying won't work if
> a temporary file is needed and cannot be created, as file access
> failures are considered fatal.
>
> OTOH, my personal preference is to keep disks mirrored; this
> ensures that a single disk failure won't affect server operations
> and provides better performance as a bonus.
>
> (Note well that the code in question was modified in freenginx to
> address the issue reported by Kirill in the thread you are
> replying to, and your patch won't apply, see
> https://freenginx.org/hg/nginx/rev/c5623963c29e for details.)
>
> [...]
>
> --
> Maxim Dounin
> http://mdounin.ru/

-------------- next part --------------
A non-text attachment was scrubbed...
Name: ideas_MISS_if_write_tempfile_failed.diff
Type: text/x-patch
Size: 3848 bytes
Desc: not available
URL:
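A footnote on Maxim's point that proxying itself fails when a
temporary file cannot be created: one blunt way to keep a suspect disk
out of the request path is to disable buffering to temporary files
altogether. The trade-off is that the upstream is then read no faster
than the client consumes the response. A sketch, not a general
recommendation; the upstream address is a placeholder:

    location / {
        proxy_pass http://127.0.0.1:8080;

        # 0 disables spooling responses that overflow the memory
        # buffers to a temporary file on disk.
        proxy_max_temp_file_size 0;
    }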