From kmm at freemail.hu Wed Apr 3 14:54:21 2024 From: kmm at freemail.hu (kmm at freemail.hu) Date: Wed, 3 Apr 2024 14:54:21 +0000 (GMT) Subject: Clever dog cylan software update help needed Message-ID: Hi all, I was sent here for support from the Clever Dog page on Google Play: https://play.google.com/store/apps/details?id=com.cylan.jiafeigou I had to install the new version, and after that I cannot enter: the "agree and continue" button is grey and inactive, only the log out works. Who can help me? I would like to use my 2 wifi baby monitors. Thanks a lot in advance, kmm -------------- next part -------------- An HTML attachment was scrubbed... URL: From kirill at korins.ky Sun Apr 7 11:36:21 2024 From: kirill at korins.ky (Kirill A. Korinsky) Date: Sun, 07 Apr 2024 13:36:21 +0200 Subject: Nginx ignores proxy_no_cache Message-ID: <87sezxpfpm.wl-kirill@korins.ky> Greetings, Let's assume that I would like to control caching behavior on the LB from the backend and force it to cache only responses that have an X-No-Cache header with the value NO. Nginx should cache a response with any status code if it has such a header. This works well until the backend is unavailable and nginx returns a hardcoded 502 that doesn't have the control header, but such a response is cached anyway. Here is the config that allows reproducing the issue:

http {
    default_type application/octet-stream;

    proxy_cache_path /tmp/nginx_cache keys_zone=the_zone:1m;
    proxy_cache the_zone;
    proxy_cache_valid any 15m;
    proxy_cache_methods GET HEAD POST;

    add_header X-Cache-Status $upstream_cache_status always;

    map $upstream_http_x_no_cache $no_cache {
        default 1;
        "NO"    0;
    }

    proxy_no_cache $no_cache;

    upstream echo {
        server 127.127.127.127:80;
    }

    server {
        listen 1234;
        server_name localhost;

        location / {
            proxy_pass http://echo;
        }
    }
}

When I run:

curl -D - http://127.0.0.1:1234/

it returns MISS on the first request and HIT on the second one. Here I expect both requests to return MISS.
-- wbr, Kirill From mdounin at mdounin.ru Sun Apr 7 13:56:20 2024 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 7 Apr 2024 16:56:20 +0300 Subject: Nginx ignores proxy_no_cache In-Reply-To: <87sezxpfpm.wl-kirill@korins.ky> References: <87sezxpfpm.wl-kirill@korins.ky> Message-ID: Hello! On Sun, Apr 07, 2024 at 01:36:21PM +0200, Kirill A. Korinsky wrote: > Greetings, > > Let assume that I would like behavior on LB from the backend and force it to > cache only resposnes that have a X-No-Cache header with value NO. > > Nginx should cache a response with any code, if it has such headers. > > This works well until the backend is unavailable and nginx returns a > hardcoded 502 that doesn't have a control header, but such a response is > cached anyway. > > Here is the config that allows to reproduce the issue: > > http { > default_type application/octet-stream; > > proxy_cache_path /tmp/nginx_cache keys_zone=the_zone:1m; > proxy_cache the_zone; > proxy_cache_valid any 15m; > proxy_cache_methods GET HEAD POST; > > add_header X-Cache-Status $upstream_cache_status always; > > map $upstream_http_x_no_cache $no_cache { > default 1; > "NO" 0; > } > > proxy_no_cache $no_cache; > > upstream echo { > server 127.127.127.127:80; > } > > server { > listen 1234; > server_name localhost; > > location / { > proxy_pass http://echo; > } > } > } > > when I run: > > curl -D - http://127.0.0.1:1234/ > > it returns MISS on the first request, and HIT on the second one. > > Here I expect both requests to return MISS. Thanks for the report. Indeed, proxy_no_cache is only checked for proper upstream responses, but not when caching errors, including internally generated 502/504 in ngx_http_upstream_finalize_request(), and intercepted errors in ngx_http_upstream_intercept_errors(). 
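Until that is fixed, one possible mitigation (a sketch, untested, and a trade-off: backend error responses carrying the control header would no longer be cached either) is to list cacheable status codes explicitly instead of using "any", so the internally generated 502/504 never gets a validity period:

```nginx
# Hypothetical replacement for "proxy_cache_valid any 15m;" from the
# reproduction config: responses with status codes outside this list
# (including nginx's own hardcoded 502) are simply never stored.
proxy_cache_valid 200 301 302 404 15m;
```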
Quick look suggests there will also be issues with caching errors after proxy_cache_bypass (errors won't be cached even if they should be), as well as issues with proxy_cache_max_range_offset after proxy_cache_bypass (it will be ignored). This needs cleanup / fixes; added to my TODO list. -- Maxim Dounin http://mdounin.ru/ From quickfire28 at gmail.com Tue Apr 16 04:11:32 2024 From: quickfire28 at gmail.com (zen zenitram) Date: Tue, 16 Apr 2024 12:11:32 +0800 Subject: NGINX upload limit In-Reply-To: References: Message-ID: Good day! Here is the NGINX configuration; we have tried everything, but up to now the max upload limit is still 128 kb. */etc/nginx/nginx.conf* worker_processes auto; pid /run/nginx.pid; include /etc/nginx/modules-enabled/*.conf; events { worker_connections 768; # multi_accept on; } http { # Basic Settings sendfile on; tcp_nopush on; types_hash_max_size 2048; client_max_body_size 500M; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # SSL Settings ## ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE ssl_prefer_server_ciphers on; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip_vary on; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } */etc/nginx/site-available/test.edu.ph * server { server_name test.edu.ph; location / { proxy_pass https://192.168.8.243; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; client_max_body_size 500M; }
listen 443 ssl; # managed by Certbot ssl_certificate /etc/letsencrypt/live/test.edu.ph/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/test.edu.ph/privkey.pem; # managed by Certbot include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot client_max_body_size 500M; } server { if ($host = test.edu.ph) { return 301 https://$host$request_uri; } # managed by Certbot listen 80; server_name test.edu.ph; return 404; # managed by Certbot client_max_body_size 500M; } Can anyone help us solve our max upload size limit? Our server is set to https only access. Thank you! On Fri, Mar 1, 2024 at 11:27 PM Sergey A. Osokin wrote: > Hi there, > > On Fri, Mar 01, 2024 at 04:45:07PM +0800, zen zenitram wrote: > > > > We created an institutional repository with eprints and using NGINX as > load > > balancer, but we encountered problem in uploading file to our repository. > > It only alccepts 128 kb file upload, the client_max_body_size is set to 2 > > gb. > > > > but still it only accepts 128 kb max upload size. > > How to solve this problem? > > I'd recommend to share the nginx configuration file in the maillist. > Don't forget to remove any sensitive information or create a minimal > nginx configuration reproduces the case. > > Thank you. > > -- > Sergey A. Osokin > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From duluxoz at gmail.com Tue Apr 16 06:49:09 2024 From: duluxoz at gmail.com (duluxoz) Date: Tue, 16 Apr 2024 16:49:09 +1000 Subject: Location Directive Not Working - Help Please Message-ID: Hi All, Quick Q: Why does the following config not work, i.e. NginX returns a 404 when I attempt to access a php file/page from the "/common/" location?
Obviously I'm misunderstanding something about how location directives work  :-) ~~~ location /common/ {   root /www;   try_files $uri $uri/ =404; } location ~ \.php$ {   try_files $uri =404;   deny all;   include fastcgi_params;   fastcgi_pass unix:/run/php-fpm/php-fpm.sock;   fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;   fastcgi_intercept_errors on; } location ~ /\. {   access_log off;   log_not_found off;   deny all; } location ~ ~$ {   access_log off;   log_not_found off;   deny all; } ~~~ Thanks in advance Cheers Dulux-Oz From arut at nginx.com Tue Apr 16 16:40:18 2024 From: arut at nginx.com (Roman Arutyunyan) Date: Tue, 16 Apr 2024 20:40:18 +0400 Subject: nginx-1.25.5 Message-ID: <3294B79D-C5E8-4C13-BC36-A21A8521727B@nginx.com> Changes with nginx 1.25.5 16 Apr 2024 *) Feature: virtual servers in the stream module. *) Feature: the ngx_stream_pass_module. *) Feature: the "deferred", "accept_filter", and "setfib" parameters of the "listen" directive in the stream module. *) Feature: cache line size detection for some architectures. Thanks to Piotr Sikora. *) Feature: support for Homebrew on Apple Silicon. Thanks to Piotr Sikora. *) Bugfix: Windows cross-compilation bugfixes and improvements. Thanks to Piotr Sikora. *) Bugfix: unexpected connection closure while using 0-RTT in QUIC. Thanks to Vladimir Khomutov. ---- Roman Arutyunyan arut at nginx.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From xeioex at nginx.com Tue Apr 16 21:22:22 2024 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Tue, 16 Apr 2024 14:22:22 -0700 Subject: njs-0.8.4 Message-ID: <13ef4551-a4df-4336-98f5-9dcefca1bf7e@nginx.com> Hello, I'm glad to announce a new release of NGINX JavaScript module (njs). This release introduced the initial QuickJS engine support in CLI as well as regular bugfixes. 
Notable new features: - QuickJS in njs CLI: : $ ./configure --cc-opt="-I/path/to/quickjs -L/path/to/quickjs" && make njs : $ ./build/njs -n QuickJS : : >> new Map() : [object Map] Learn more about njs: - Overview and introduction:       https://nginx.org/en/docs/njs/ - NGINX JavaScript in Your Web Server Configuration:       https://youtu.be/Jc_L6UffFOs - Extending NGINX with Custom Code:       https://youtu.be/0CVhq4AUU7M - Using node modules with njs:       https://nginx.org/en/docs/njs/node_modules.html - Writing njs code using TypeScript definition files:       https://nginx.org/en/docs/njs/typescript.html Feel free to try it and give us feedback on: - Github:       https://github.com/nginx/njs/issues - Mailing list:       https://mailman.nginx.org/mailman/listinfo/nginx-devel Additional examples and howtos can be found here: - Github:       https://github.com/nginx/njs-examples Changes with njs 0.8.4                                       16 Apr 2024     nginx modules:     *) Feature: allowing to set Server header for outgoing headers.     *) Improvement: validating URI and args arguments in r.subrequest().     *) Improvement: checking for duplicate js_set variables.     *) Bugfix: fixed clear() method of a shared dictionary without        timeout introduced in 0.8.3.     *) Bugfix: fixed r.send() with Buffer argument.     Core:     *) Feature: added QuickJS engine support in CLI.     *) Bugfix: fixed atob() with non-padded base64 strings. From bmvishwas at gmail.com Wed Apr 17 00:59:56 2024 From: bmvishwas at gmail.com (Vishwas Bm) Date: Wed, 17 Apr 2024 06:29:56 +0530 Subject: Nginx 1.26 Message-ID: Hi, When will nginx 1.26.0 be available ? Any specific timeline for this ? Regards, Vishwas -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From juef at juef.net Wed Apr 17 12:39:57 2024 From: juef at juef.net (juef) Date: Wed, 17 Apr 2024 15:39:57 +0300 Subject: nginx-1.25.5 In-Reply-To: <3294B79D-C5E8-4C13-BC36-A21A8521727B@nginx.com> References: <3294B79D-C5E8-4C13-BC36-A21A8521727B@nginx.com> Message-ID: Hi, (Tue, 16 Apr 20:40) Roman Arutyunyan: > Changes with nginx 1.25.5 16 Apr 2024 > > *) Feature: virtual servers in the stream module. > > *) Feature: the ngx_stream_pass_module. > > *) Feature: the "deferred", "accept_filter", and "setfib" parameters of > the "listen" directive in the stream module. > > *) Feature: cache line size detection for some architectures. > Thanks to Piotr Sikora. > > *) Feature: support for Homebrew on Apple Silicon. > Thanks to Piotr Sikora. > > *) Bugfix: Windows cross-compilation bugfixes and improvements. > Thanks to Piotr Sikora. > > *) Bugfix: unexpected connection closure while using 0-RTT in QUIC. > Thanks to Vladimir Khomutov. I'm subscribed to Mercurial Atom feed also. There are incorrect links, they contain redundant port definition, and because of that there is an SSL error: packet length too long. i.e. https://hg.nginx.org:80/nginx/rev/8618e4d900cc From r at roze.lv Wed Apr 17 14:32:55 2024 From: r at roze.lv (Reinis Rozitis) Date: Wed, 17 Apr 2024 17:32:55 +0300 Subject: nginx-1.25.5 In-Reply-To: <3294B79D-C5E8-4C13-BC36-A21A8521727B@nginx.com> References: <3294B79D-C5E8-4C13-BC36-A21A8521727B@nginx.com> Message-ID: <000001da90d4$235b8d80$6a12a880$@roze.lv> >    *) Feature: the ngx_stream_pass_module. Hello, what is the difference between pass from ngx_stream_pass_module and proxy_pass from ngx_stream_proxy_module? As in what entails "directly" in "allows passing the accepted connection directly to any configured listening socket"? 
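For reference, the documented usage looks roughly like this (a minimal sketch; the port numbers are made up):

```nginx
# Connections accepted on :12345 are handed to the HTTP listener on :8000
# as if they had been accepted by it, with no proxying in between.
stream {
    server {
        listen 12345;
        pass 127.0.0.1:8000;
    }
}

http {
    server {
        listen 8000;
        # ...
    }
}
```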
wbr rr From mayianmm at jmu.edu Wed Apr 17 15:15:01 2024 From: mayianmm at jmu.edu (Mayiani, Martin Martine - mayianmm) Date: Wed, 17 Apr 2024 15:15:01 +0000 Subject: Nginx-1.25.3 Proxy_temp Creating Files root partition Message-ID: Hi, So for some odd reason Nginx is creating temp files in the root partition, filling up the disk, and deleting them at a slow rate. Aren't these files supposed to be in /var/cache/nginx/proxy_temp? Currently they're located at /etc/nginx/proxy_temp. How do I change that, and how do I stop proxy_temp from creating so many temp files and filling up my disk? Thanks Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From arut at nginx.com Wed Apr 17 18:26:38 2024 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 17 Apr 2024 22:26:38 +0400 Subject: nginx-1.25.5 In-Reply-To: <000001da90d4$235b8d80$6a12a880$@roze.lv> References: <3294B79D-C5E8-4C13-BC36-A21A8521727B@nginx.com> <000001da90d4$235b8d80$6a12a880$@roze.lv> Message-ID: Hello, > On 17 Apr 2024, at 6:32 PM, Reinis Rozitis via nginx wrote: > >> *) Feature: the ngx_stream_pass_module. > > Hello, > what is the difference between pass from ngx_stream_pass_module and > proxy_pass from ngx_stream_proxy_module? > > As in what entails "directly" in "allows passing the accepted connection > directly to any configured listening socket"? In case of "pass" there's no proxying, hence zero overhead. The connection is passed to the new listening socket like it was accepted by it. ---- Roman Arutyunyan arut at nginx.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From fusca14 at gmail.com Fri Apr 19 01:14:44 2024 From: fusca14 at gmail.com (Fabiano Furtado Pessoa Coelho) Date: Thu, 18 Apr 2024 22:14:44 -0300 Subject: nginx-1.25.5 In-Reply-To: References: <3294B79D-C5E8-4C13-BC36-A21A8521727B@nginx.com> <000001da90d4$235b8d80$6a12a880$@roze.lv> Message-ID: Hi...
On Wed, Apr 17, 2024 at 3:27 PM Roman Arutyunyan wrote: > > Hello, > > On 17 Apr 2024, at 6:32 PM, Reinis Rozitis via nginx wrote: > > *) Feature: the ngx_stream_pass_module. > > > Hello, > what is the difference between pass from ngx_stream_pass_module and > proxy_pass from ngx_stream_proxy_module? > > As in what entails "directly" in "allows passing the accepted connection > directly to any configured listening socket"? > > > In case of "pass" there's no proxying, hence zero overhead. > The connection is passed to the new listening socket like it was accepted by it. Please, can you spot these overheads in proxying? Thanks. From quickfire28 at gmail.com Fri Apr 19 07:36:09 2024 From: quickfire28 at gmail.com (zen zenitram) Date: Fri, 19 Apr 2024 15:36:09 +0800 Subject: I need help with our NGINX set up Message-ID: Good day! We have an Institutional Repository server that uses NGINX as a load balancer, but we encountered a problem when trying to upload documents to the repository. It only accepts a maximum of 128 kb of data, even though client_max_body_size is set to 500M. Is there a way to locate the cause of the error?
Here are our NGINX configuration files */etc/nginx/nginx.conf* worker_processes auto; pid /run/nginx.pid; include /etc/nginx/modules-enabled/*.conf; events { worker_connections 768; # multi_accept on; } http { # Basic Settings sendfile on; tcp_nopush on; types_hash_max_size 2048; client_max_body_size 500M; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # SSL Settings ## ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE ssl_prefer_server_ciphers on; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip_vary on; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } */etc/nginx/site-available/test.edu.ph * server { server_name test.edu.ph; location / { proxy_pass https://192.168.8.243; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; client_max_body_size 500M; } listen 443 ssl; # managed by Certbot ssl_certificate /etc/letsencrypt/live/test.edu.ph/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/test.edu.ph/privkey.pem; # managed by Certbot include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot client_max_body_size 500M; } server { if ($host = test.edu.ph) { return 301 https://$host$request_uri; } # managed by Certbot listen 80; server_name test.edu.ph; return 404; # managed by Certbot client_max_body_size 500M; } Need help with this one. 
Thank you! -------------- next part -------------- An HTML attachment was scrubbed... URL: From kirill at korins.ky Fri Apr 19 07:37:28 2024 From: kirill at korins.ky (Kirill A. Korinsky) Date: Fri, 19 Apr 2024 09:37:28 +0200 Subject: nginx-1.25.5 In-Reply-To: References: <3294B79D-C5E8-4C13-BC36-A21A8521727B@nginx.com> <000001da90d4$235b8d80$6a12a880$@roze.lv> Message-ID: <8e3761ad08e8afce@mx1.catap.net> On Fri, 19 Apr 2024 03:14:44 +0200, Fabiano Furtado Pessoa Coelho wrote: > > Please, can you spot these overheads in proxying? > Establishing and accepting a brand new connection, writing and reading of requests. Maybe buffering. A lot of useless context switching between user and kernel spaces. With the possibility of running out of free ports. Shall I continue? -- wbr, Kirill From srebecchi at kameleoon.com Fri Apr 19 08:02:00 2024 From: srebecchi at kameleoon.com (=?UTF-8?Q?S=C3=A9bastien_Rebecchi?=) Date: Fri, 19 Apr 2024 10:02:00 +0200 Subject: nginx-1.25.5 In-Reply-To: <8e3761ad08e8afce@mx1.catap.net> References: <3294B79D-C5E8-4C13-BC36-A21A8521727B@nginx.com> <000001da90d4$235b8d80$6a12a880$@roze.lv> <8e3761ad08e8afce@mx1.catap.net> Message-ID: Hello, As I understand it, we should replace proxy_pass with pass when the upstream server is localhost, but pass does not work with remote upstreams. Is that right? Sébastien Le ven. 19 avr. 2024 à 09:38, Kirill A. Korinsky a écrit : > On Fri, 19 Apr 2024 03:14:44 +0200, > Fabiano Furtado Pessoa Coelho wrote: > > > > Please, can you spot these overheads in proxying? > > > > Establishing and accepting a brand new connection, writing and reading of > requests. Maybe buffering. A lot of useless context switching between > user and kernel spaces. With the possibility of running out of free ports. > > Shall I continue?
> > -- > wbr, Kirill > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kirill at korins.ky Fri Apr 19 08:38:01 2024 From: kirill at korins.ky (Kirill A. Korinsky) Date: Fri, 19 Apr 2024 10:38:01 +0200 Subject: nginx-1.25.5 In-Reply-To: References: <3294B79D-C5E8-4C13-BC36-A21A8521727B@nginx.com> <000001da90d4$235b8d80$6a12a880$@roze.lv> <8e3761ad08e8afce@mx1.catap.net> Message-ID: <3ec50cf48070b986@mx2.catap.net> On Fri, 19 Apr 2024 10:02:00 +0200, Sébastien Rebecchi wrote: > > As I understand we better replace all proxy_pass to pass when upstream > server is localhost, but pass does not work with remote upstreams. > Is that right? > It depends on your use case I guess. Frankly speaking I don't see any reason to use it, and do not accept connection by target server with one exception: you need some module which exists only for ngx_stream_... -- wbr, Kirill From r at roze.lv Fri Apr 19 10:18:20 2024 From: r at roze.lv (Reinis Rozitis) Date: Fri, 19 Apr 2024 13:18:20 +0300 Subject: I need help with our NGINX set up In-Reply-To: References: Message-ID: <001c01da9242$e784fc40$b68ef4c0$@roze.lv> > It only accepts maximum of 128 kb of data, but the client_max_body_size 500M;. Is there a way to locate the cause of error. Can you actually show what the "error" looks like? The default value of client_max_body_size is 1M so the 128Kb limit most likely comes from the backend application or server which handles the POST request (as an example - PHP has its own post_max_size / upload_max_filesize settings). p.s. while it's unlikely (as you specify the settings in particular location blocks) since you use wildcard includes it is always good to check with 'nginx -T' how the final configuration looks like. Maybe the request isn't handled in server/location block where you expect it .. 
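To illustrate the inheritance point (file layout hypothetical): client_max_body_size can be set at the http, server, and location levels, the most specific matching context wins, and a stray low value in any included file silently caps uploads:

```nginx
# Sketch only. If some file under conf.d/ or sites-enabled/ sets a lower
# client_max_body_size in a server{} or location{} that actually handles
# the upload, that lower value wins over the http-level 500M.
http {
    client_max_body_size 500M;         # default for all servers below
    include /etc/nginx/conf.d/*.conf;  # any server{} here may override it

    server {
        listen 443 ssl;
        client_max_body_size 500M;     # server-level override

        location /upload/ {
            client_max_body_size 500M; # most specific: wins for /upload/
        }
    }
}
```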
rr From marcin.wanat at gmail.com Sat Apr 20 14:12:28 2024 From: marcin.wanat at gmail.com (Marcin Wanat) Date: Sat, 20 Apr 2024 16:12:28 +0200 Subject: QUIC: use sendmmsg() with GSO Message-ID: Hi, I discovered a patch for QUIC that enables the use of sendmmsg() with GSO, authored by Roman Arutyunyan: https://mailman.nginx.org/pipermail/nginx-devel/2023-July/4ZTXGDMY2LC4VRZRBNBXGULYHS5DMR3Z.html However, for some reason, this patch has never been merged into the Nginx codebase. According to Cloudflare, it could offer significant performance improvements: https://blog.cloudflare.com/accelerating-udp-packet-transmission-for-quic Could you provide any insights on why this patch has not been merged? Regards, Marcin Wanat From quickfire28 at gmail.com Tue Apr 23 08:49:47 2024 From: quickfire28 at gmail.com (zen zenitram) Date: Tue, 23 Apr 2024 16:49:47 +0800 Subject: I need help with our NGINX set up In-Reply-To: <001c01da9242$e784fc40$b68ef4c0$@roze.lv> References: <001c01da9242$e784fc40$b68ef4c0$@roze.lv> Message-ID: Good day! Here is what happens when we try to upload a file larger than 128 kb. To check whether it is on the server side, we ran the server without nginx, and it could upload larger files. Thank you! On Fri, Apr 19, 2024 at 6:18 PM Reinis Rozitis via nginx wrote: > > It only accepts maximum of 128 kb of data, but the client_max_body_size > 500M;. Is there a way to locate the cause of error. > > Can you actually show what the "error" looks like? > > The default value of client_max_body_size is 1M so the 128Kb limit most > likely comes from the backend application or server which handles the POST > request (as an example - PHP has its own post_max_size / > upload_max_filesize settings). > > > > p.s. while it's unlikely (as you specify the settings in particular > location blocks) since you use wildcard includes it is always good to check > with 'nginx -T' how the final configuration looks like.
Maybe the request > isn't handled in server/location block where you expect it .. > > rr > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: error eprints on nginx.jpg Type: image/jpeg Size: 307084 bytes Desc: not available URL: From alexhus at microsoft.com Tue Apr 23 09:40:43 2024 From: alexhus at microsoft.com (Alex Hussein-Kershaw (HE/HIM)) Date: Tue, 23 Apr 2024 09:40:43 +0000 Subject: Leaky NGINX Plugin Advice Message-ID: Hi Folks, I've inherited an nginx plugin, written against 0.7.69 that has recently been moved to use nginx 1.24.0 to resolve the need to ship old versions of openssl. I've found during performance testing that it's leaking file descriptors. After a few hours running and leaking I hit my configured limit of 100k worker_connections which gets written to logs, and nginx starts "reusing connections". The leaked file descriptors don't show up in the output of "ss", they look like this in lsof: $ /usr/bin/lsof -p 2875952 | grep protocol | head -2 nginx 2875952 user 8u sock 0,8 0t0 2222824178 protocol: TCP nginx 2875952 user 19u sock 0,8 0t0 2266802646 protocol: TCP Googling suggests this may be a socket that has been created but never had a "bind" or "connect" call. I've combed through our plugin code, and am confident it's not responsible for making and leaking these sockets. I should flag two stinkers which may be responsible: * We have "lingering_timeout" set to an hour, a hack to allow long poll / COMET requests to not be torn down before responding. Stopping load and waiting for an hour does drop some of these leaked fds, but not all. After leaking 17k fds, I stopped my load test and saw it drop to 7k fds which appeared to remain indefinitely. Is this a terrible idea? 
* Within our plugin, we are incrementing the request count field for the same purpose. I'm not really sure why we need both of these; maybe I'm missing something, but I can't get COMET polls to work without it. I believe that was inspired by Nchan, which does something similar. Should I be able to avoid requests getting torn down via this method without lingering_timeout? What could be responsible for these leaked file descriptors and worker connections? I'm inexperienced with nginx, so any pointers on where to look are greatly appreciated. Many thanks, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From arut at nginx.com Tue Apr 23 17:50:43 2024 From: arut at nginx.com (Roman Arutyunyan) Date: Tue, 23 Apr 2024 21:50:43 +0400 Subject: nginx-1.26.0 Message-ID: <6C2D9B10-1691-4572-95F1-1752C2F3B9C9@nginx.com> Changes with nginx 1.26.0 23 Apr 2024 *) 1.26.x stable branch. ---- Roman Arutyunyan arut at nginx.com From venefax at gmail.com Thu Apr 25 04:10:44 2024 From: venefax at gmail.com (Saint Michael) Date: Thu, 25 Apr 2024 00:10:44 -0400 Subject: headers do not work Message-ID: I keep getting this error: *356 client sent invalid header line: "Finagle-Ctx-com.twitter.finagle.Retries: 0" while reading client request headers, and Twitter cannot read my twitter-card. Yet I do have underscores_in_headers on; ignore_invalid_headers on; in the http{} block. How do I force nginx to disregard any illegal headers?
From arut at nginx.com Thu Apr 25 14:19:03 2024 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 25 Apr 2024 18:19:03 +0400 Subject: QUIC: use sendmmsg() with GSO In-Reply-To: References: Message-ID: <52B24CE5-5D21-42BE-AE19-E3F1E4648EBB@nginx.com> Hi, > On 20 Apr 2024, at 6:12 PM, Marcin Wanat wrote: > > Hi, > > I discovered a patch for QUIC that enables the use of sendmmsg() with > GSO, authored by Roman Arutyunyan: > > https://mailman.nginx.org/pipermail/nginx-devel/2023-July/4ZTXGDMY2LC4VRZRBNBXGULYHS5DMR3Z.html > > However, for some reason, this patch has never been merged into the > Nginx codebase. According to Cloudflare, it could offer significant > performance improvements: > > https://blog.cloudflare.com/accelerating-udp-packet-transmission-for-quic > > Could you provide any insights on why this patch has not been merged? We tested the patch and did not see noticeable performance benefits. We are exploring different approaches to batching input and output packets in QUIC. > Regards, > Marcin Wanat ---- Roman Arutyunyan arut at nginx.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From arut at nginx.com Fri Apr 26 12:20:18 2024 From: arut at nginx.com (Roman Arutyunyan) Date: Fri, 26 Apr 2024 16:20:18 +0400 Subject: headers do not work In-Reply-To: References: Message-ID: Hi, > On 25 Apr 2024, at 8:10 AM, Saint Michael wrote: > > I keep getting this error > *356 client sent invalid header line: > "Finagle-Ctx-com.twitter.finagle.Retries: 0" while reading client > request headers, and twitter cannot read my twitter-card > yet, I do have > underscores_in_headers on; > ignore_invalid_headers on; > in the http{} block > How do I force nginx to disregard any illegal headers? What exactly do you mean by disregard? Do you want nginx to skip them or allow them to pass? The "ignore_invalid_headers on" directive (which is also the default) explicitly enables skipping them, and this fact is reported in log. 
Turn it off and those characters (dot in your case) will pass. --- Roman Arutyunyan arut at nginx.com From venefax at gmail.com Fri Apr 26 21:19:48 2024 From: venefax at gmail.com (Saint Michael) Date: Fri, 26 Apr 2024 17:19:48 -0400 Subject: headers do not work In-Reply-To: References: Message-ID: I am not using openresty as a proxy but as a web server. Twitter requests an image, and it sends the HTTP request with the header "Finagle-Ctx-com.twitter.finagle." This makes nginx abort the operation. I need the operation to complete as designed. For that, I added underscores_in_headers (on or off makes no difference) and ignore_invalid_headers (on or off makes no difference). Any ideas? On Fri, Apr 26, 2024 at 8:20 AM Roman Arutyunyan wrote: > > Hi, > > > On 25 Apr 2024, at 8:10 AM, Saint Michael wrote: > > > > I keep getting this error > > *356 client sent invalid header line: > > "Finagle-Ctx-com.twitter.finagle.Retries: 0" while reading client > > request headers, and twitter cannot read my twitter-card > > yet, I do have > > underscores_in_headers on; > > ignore_invalid_headers on; > > in the http{} block > > How do I force nginx to disregard any illegal headers? > > What exactly do you mean by disregard? > Do you want nginx to skip them or allow them to pass? > > The "ignore_invalid_headers on" directive (which is also the default) explicitly enables skipping them, and this fact is reported in log.
> > --- > Roman Arutyunyan > arut at nginx.com > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx From arut at nginx.com Mon Apr 29 12:27:28 2024 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 29 Apr 2024 16:27:28 +0400 Subject: headers do not work In-Reply-To: References: Message-ID: <20240429122728.xr2gkxmwmfrcqjbx@N00W24XTQX> Hi, On Fri, Apr 26, 2024 at 05:19:48PM -0400, Saint Michael wrote: > I am not using openresty as a proxy but as a web server. Twitter > requests an image, and it sends the HTTP request with header > "Finagle-Ctx-com.twitter.finagle." This makes nginx abort the > operation. > I need the operation to be completed as designed. For that, I added > underscores_in_headers (on or off) makes no difference. > ignore_invalid_headers on (on or off) makes no difference > any ideas? It's very strange that "ignore_invalid_headers off" did not work for you. It disables this error and makes nginx treat the problematic header as normal. Please double-check this. There's no other part of code in nginx that would generate an error like that. And the part that does, can be disabled by "ignore_invalid_headers off". Also, skipping the header should not abort the request. Please check the error log for the real reason why this is happening. > On Fri, Apr 26, 2024 at 8:20 AM Roman Arutyunyan wrote: > > > > Hi, > > > > > On 25 Apr 2024, at 8:10 AM, Saint Michael wrote: > > > > > > I keep getting this error > > > *356 client sent invalid header line: > > > "Finagle-Ctx-com.twitter.finagle.Retries: 0" while reading client > > > request headers, and twitter cannot read my twitter-card > > > yet, I do have > > > underscores_in_headers on; > > > ignore_invalid_headers on; > > > in the http{} block > > > How do I force nginx to disregard any illegal headers? > > > > What exactly do you mean by disregard? > > Do you want nginx to skip them or allow them to pass? 
> > > > The "ignore_invalid_headers on" directive (which is also the default) explicitly enables skipping them, and this fact is reported in log. > > Turn it off and those characters (dot in your case) will pass. > > > > --- > > Roman Arutyunyan > > arut at nginx.com > > > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > https://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://mailman.nginx.org/mailman/listinfo/nginx -- Roman Arutyunyan
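P.S. For completeness, a minimal server sketch (the listen port and path are hypothetical) with invalid-header skipping disabled, as described above:

```nginx
# Sketch only: with ignore_invalid_headers off, header names containing
# dots, such as "Finagle-Ctx-com.twitter.finagle.Retries", are accepted
# instead of being skipped and logged as invalid.
server {
    listen 8080;
    ignore_invalid_headers off;
    underscores_in_headers on;

    location /images/ {
        root /var/www/html;  # hypothetical path
    }
}
```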