From t.stark at f5.com Tue Jun 6 19:40:44 2023
From: t.stark at f5.com (Timo Stark)
Date: Tue, 6 Jun 2023 19:40:44 +0000
Subject: RE: NGINX Unit Community Call No.1
In-Reply-To:
References:
Message-ID:

Dear Unit Community Members,

We would like to remind you of the forthcoming NGINX Unit community call, which is set to take place tomorrow, June 7th. This is a friendly reminder to ensure that you don't miss out on this exciting event. To secure your participation, we kindly request that you register at the following link:

https://unit.nginx.org/news/2023/nginx-unit-community-call-no-1/

We eagerly anticipate your presence and active engagement during this significant gathering. This community call serves as an invaluable platform for us to come together as a community, exchange ideas, share insights, and foster meaningful collaborations. We firmly believe that your unique perspectives and contributions will greatly enrich our discussions and further strengthen our collective knowledge.

Best,
Timo and the Unit Team.

From: unit on behalf of Timo Stark via unit
Sent: Thursday, 25 May 2023 17:44
To: unit at nginx.org
Cc: Timo Stark
Subject: NGINX Unit Community Call No.1

EXTERNAL MAIL: unit-bounces at nginx.org

Exciting news! We're holding our first community call for NGINX Unit on June 7. We'll cover new features, future direction and roadmap, and host a Q&A / discussion.

Learn more and register: https://unit.nginx.org/news/2023/nginx-unit-community-call-no-1/

_______________________________________________
unit mailing list
unit at nginx.org
https://mailman.nginx.org/mailman/listinfo/unit

From p.tkatchenko at bimp.fr Tue Jun 20 09:31:17 2023
From: p.tkatchenko at bimp.fr (Peter TKATCHENKO)
Date: Tue, 20 Jun 2023 09:31:17 +0000
Subject: Strange Unit failure
Message-ID:

Hello,

We had a strange failure of Unit yesterday. I would like to get your advice about the possible reason for the problem (and how to avoid it in the future).
We use Unit as a PHP application server on FreeBSD 13.2 (in a jail). Unit sits behind NGINX (connected via a Unix socket). The PHP configuration of Unit is as follows:

"processes": {
    "max": 40,
    "spare": 20,
    "idle_timeout": 60
},

"limits": {
    "timeout": 1200,
    "requests": 10000
},

The version of Unit is 1.29.1, the version of PHP is 8.2.3.

External access to the application is proxied by HAProxy with sticky sessions configured. Direct access to the NGINX servers is possible too (with other URLs).

Yesterday we began to (randomly) get 503 errors on different pages of the application. The same page could be OK for one user but fail with a 503 error for another user. The source of these errors was Unit. We managed to isolate the problem and resolved it by restarting the Unit service on 2 servers (of 3). We did not find any useful information in the Unit logs. Nothing interesting in the NGINX logs either.

I suppose that some Unit workers were in a strange semi-failed state, but the master process did not kill them and continued to send them requests.

Any ideas?

Best regards,

Peter

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From andrew at digital-domain.net Tue Jun 20 22:00:30 2023
From: andrew at digital-domain.net (Andrew Clayton)
Date: Tue, 20 Jun 2023 23:00:30 +0100
Subject: Strange Unit failure
In-Reply-To:
References:
Message-ID: <20230620230030.074cca0d@kappa.digital-domain.net>

On Tue, 20 Jun 2023 09:31:17 +0000
Peter TKATCHENKO wrote:

> Hello,

Hi,

[...]

> External access to the application is proxied by HAProxy with sticky
> sessions configured. Direct access to NGINX servers is possible too
> (with another URLs).

You mention proxies. Are you using the "proxy" setting in your config?

> Yesterday we began to (randomly) get 503 errors on different pages of
> the application. The same page could be OK for one user, but failed
> with 503 error for other user. The source of these errors was Unit. We

Are the error cases using WebSockets?
> could isolate the problems and solved them restarting Unit service on
> 2 servers (of 3). We did not find any useful information in Unit logs.
> Nothing interesting in NGINX logs neither.
>
> I suppose that some Unit workers were in strange semi-failured state,
> but the master process did not kill them and continued to send them
> the requests.
>
> Any ideas?

Could you confirm if you are or aren't seeing actual crashes in the unit
processes?

Do you see process exited on signal 11 (or maybe 6) messages in the unit
log for example, or core dumps being generated?

> Best regards,
>
> Peter

Cheers,
Andrew

From p.tkatchenko at bimp.fr Wed Jun 21 13:48:19 2023
From: p.tkatchenko at bimp.fr (Peter TKATCHENKO)
Date: Wed, 21 Jun 2023 13:48:19 +0000
Subject: Strange Unit failure
In-Reply-To: <20230620230030.074cca0d@kappa.digital-domain.net>
References: <20230620230030.074cca0d@kappa.digital-domain.net>
Message-ID: <76878c8e-6f43-8f35-a3fd-32d58b87c429@bimp.fr>

Thanks for your answer, Andrew.

>> External access to the application is proxied by HAProxy with sticky
>> sessions configured. Direct access to NGINX servers is possible too
>> (with another URLs).
> You mention proxies. Are you using the "proxy" setting in your config?

I have

proxy_pass http://unit_backend$request_uri;

in the NGINX config, that's all. Nothing in the Unit config.

>> Yesterday we began to (randomly) get 503 errors on different pages of
>> the application. The same page could be OK for one user, but failed
>> with 503 error for other user. The source of these errors was Unit. We
> Are the error cases using WebSockets?

No, we don't use WebSockets.

>> could isolate the problems and solved them restarting Unit service on
>> 2 servers (of 3). We did not find any useful information in Unit logs.
>> Nothing interesting in NGINX logs neither.
>>
>> I suppose that some Unit workers were in strange semi-failured state,
>> but the master process did not kill them and continued to send them
>> the requests.
>>
>> Any ideas?
> Could you confirm if you are or aren't seeing actual crashes in the unit
> processes?
>
> Do you see process exited on signal 11 (or maybe 6) messages in the unit
> log for example or core dumps being generated?

Yes, I see many log entries with signal 11, right after several writev failures:

2023/06/19 16:52:06 [info] 56492#102962 *39777 writev(21, 3) failed (32: Broken pipe)
2023/06/19 16:52:22 [info] 56492#102962 *39738 writev(22, 3) failed (32: Broken pipe)
2023/06/19 16:52:46 [info] 56492#102968 *39753 writev(30, 3) failed (32: Broken pipe)
2023/06/19 16:53:10 [alert] 56493#101705 app process 11701 exited on signal 11
2023/06/19 16:53:11 [alert] 56493#101705 app process 84631 exited on signal 11
2023/06/19 16:53:11 [alert] 56493#101705 app process 98305 exited on signal 11

I don't think the unitd worker can write its core dumps anywhere, as it is started as the www user, so it has no write permissions.

Peter

From andrew at digital-domain.net Fri Jun 23 16:03:48 2023
From: andrew at digital-domain.net (Andrew Clayton)
Date: Fri, 23 Jun 2023 17:03:48 +0100
Subject: Strange Unit failure
In-Reply-To: <76878c8e-6f43-8f35-a3fd-32d58b87c429@bimp.fr>
References: <20230620230030.074cca0d@kappa.digital-domain.net> <76878c8e-6f43-8f35-a3fd-32d58b87c429@bimp.fr>
Message-ID: <20230623170348.30a8b688@kappa.digital-domain.net>

On Wed, 21 Jun 2023 13:48:19 +0000
Peter TKATCHENKO wrote:

> Thanks for your answer, Andrew.

NP.

> >> External access to the application is proxied by HAProxy with sticky
> >> sessions configured. Direct access to NGINX servers is possible too
> >> (with another URLs).
> > You mention proxies. Are you using the "proxy" setting in your config?
>
> I have
>
> proxy_pass http://unit_backend$request_uri;
>
> in NGINX config, that's all. Nothing in Unit config.

OK.

> >> Yesterday we began to (randomly) get 503 errors on different pages of
> >> the application. The same page could be OK for one user, but failed
The source of these errors was Unit. We
> > Are the error cases using WebSockets?
>
> Non, we don't use WebSockets.

OK.

> > Do you see process exited on signal 11 (or maybe 6) messages in the unit
> > log for example or core dumps being generated?
>
> Yes, I see many logs with signal 11 just after several writev failed:

Right...

> 2023/06/19 16:52:06 [info] 56492#102962 *39777 writev(21, 3) failed (32: Broken pipe)
> 2023/06/19 16:52:22 [info] 56492#102962 *39738 writev(22, 3) failed (32: Broken pipe)
> 2023/06/19 16:52:46 [info] 56492#102968 *39753 writev(30, 3) failed (32: Broken pipe)
> 2023/06/19 16:53:10 [alert] 56493#101705 app process 11701 exited on signal 11
> 2023/06/19 16:53:11 [alert] 56493#101705 app process 84631 exited on signal 11
> 2023/06/19 16:53:11 [alert] 56493#101705 app process 98305 exited on signal 11
>
> I don't think Unitd worker can write his dumps somewhere as it is
> started as www user, so hi has no write permissions.

So this will be hard to diagnose without either a backtrace from a core
dump or a reliable way to reproduce it.

I'm not sure how to do it on FreeBSD (on Linux it's just a matter of
editing /proc/sys/kernel/core_pattern, and these days coredumps
get piped through to systemd by default), but perhaps you can have the
coredumps go to /var/tmp or something?

Or if you have an easy way to reproduce it?

Cheers,
Andrew

From maxim at nginx.com Fri Jun 23 16:12:02 2023
From: maxim at nginx.com (Maxim Konovalov)
Date: Fri, 23 Jun 2023 09:12:02 -0700
Subject: Strange Unit failure
In-Reply-To: <20230623170348.30a8b688@kappa.digital-domain.net>
References: <20230620230030.074cca0d@kappa.digital-domain.net> <76878c8e-6f43-8f35-a3fd-32d58b87c429@bimp.fr> <20230623170348.30a8b688@kappa.digital-domain.net>
Message-ID:

Hi

[…]

> So this will be hard to diagnose without either a backtrace from a core
> dump or a reliable way to reproduce it.
> > I'm not sure how to do it on FreeBSD (on Linux it's just a matter of
> > editing /proc/sys/kernel/core_pattern, and these days coredumps
> > get piped through to systemd by default), but perhaps you can have the
> > coredumps go to /var/tmp or something?

To enable coredumps on FreeBSD I'd recommend something like this:

# mkdir -p /var/coredumps && chmod 1777 /var/coredumps
# sysctl kern.corefile="/var/coredumps/%N.%P.core"
# sysctl kern.sugid_coredump=1

You may want to put the above sysctls in /etc/sysctl.conf.

Make sure that you have coredumps enabled at all (they are enabled by default):

# sysctl kern.coredump

You may also want to adjust the core file size limit in /etc/login.conf (pay attention to the comments at the top of the file), but by default there is no limit.

--
Maxim Konovalov

From osa at freebsd.org.ru Fri Jun 23 17:04:34 2023
From: osa at freebsd.org.ru (Sergey A. Osokin)
Date: Fri, 23 Jun 2023 20:04:34 +0300
Subject: Strange Unit failure
In-Reply-To:
References:
Message-ID:

Hi there,

On Tue, Jun 20, 2023 at 09:31:17AM +0000, Peter TKATCHENKO wrote:
>
> We had a strange failure of Unit yesterday. I would like to get your advise about a possible reason of the problem (and how to avoid it in the future).
>
> We use Unit as PHP application server on FreeBSD 13.2 (in a jail). Unit is behind Nginx (connected using unix socket). PHP configuration of Unit is as follows:
>
> "processes": {
>     "max": 40,
>     "spare": 20,
>     "idle_timeout": 60
> },
>
> "limits": {
>     "timeout": 1200,
>     "requests": 10000
> },
>
> The version of Unit is 1.29.1, the version of PHP is 8.2.3.

Fresh versions of both products (1.30.0 and php82-8.2.7) are already available
in the FreeBSD ports tree, and as packages as well. I'd recommend upgrading both
products and trying to reproduce the issue.

Thank you.

--
Sergey A. Osokin

From osa at freebsd.org.ru Fri Jun 23 17:06:32 2023
From: osa at freebsd.org.ru (Sergey A.
Osokin)
Date: Fri, 23 Jun 2023 20:06:32 +0300
Subject: Strange Unit failure
In-Reply-To: <76878c8e-6f43-8f35-a3fd-32d58b87c429@bimp.fr>
References: <20230620230030.074cca0d@kappa.digital-domain.net> <76878c8e-6f43-8f35-a3fd-32d58b87c429@bimp.fr>
Message-ID:

On Wed, Jun 21, 2023 at 01:48:19PM +0000, Peter TKATCHENKO wrote:
>
> >> External access to the application is proxied by HAProxy with sticky
> >> sessions configured. Direct access to NGINX servers is possible too
> >> (with another URLs).
> > You mention proxies. Are you using the "proxy" setting in your config?
>
> I have
>
> proxy_pass http://unit_backend$request_uri;
>
> in NGINX config, that's all. Nothing in Unit config.

Could you please provide a bit more detail on this, i.e. the full
`server {}' block.

Thanks.

--
Sergey A. Osokin

From p.tkatchenko at bimp.fr Mon Jun 26 08:14:35 2023
From: p.tkatchenko at bimp.fr (Peter TKATCHENKO)
Date: Mon, 26 Jun 2023 08:14:35 +0000
Subject: Strange Unit failure
In-Reply-To: <20230623170348.30a8b688@kappa.digital-domain.net>
References: <20230620230030.074cca0d@kappa.digital-domain.net> <76878c8e-6f43-8f35-a3fd-32d58b87c429@bimp.fr> <20230623170348.30a8b688@kappa.digital-domain.net>
Message-ID:

On 23/06/2023 18:03, Andrew Clayton wrote:
> So this will be hard to diagnose without either a backtrace from a core
> dump or a reliable way to reproduce it.
>
> I'm not sure how to do it on FreeBSD (on Linux it's just a matter of
> editing /proc/sys/kernel/core_pattern, and these days coredumps
> get piped through to systemd by default), but perhaps you can have the
> coredumps go to /var/tmp or something?
>
> Or if you have an easy way to reproduce it?
>
> Cheers,
> Andrew

This is the key problem in debugging this case: we don't know the exact reason for the problem, and we cannot reproduce it. We had this problem at the same time on two servers, so I suspect there was an attack at the PHP level that crashed Unit.
As we cannot reproduce the attack, we cannot reproduce the crash.

Best regards,

Peter

From p.tkatchenko at bimp.fr Mon Jun 26 08:16:39 2023
From: p.tkatchenko at bimp.fr (Peter TKATCHENKO)
Date: Mon, 26 Jun 2023 08:16:39 +0000
Subject: Strange Unit failure
In-Reply-To:
References:
Message-ID: <36cb4261-808e-1791-0160-c23583f69c0a@bimp.fr>

On 23/06/2023 19:04, Sergey A. Osokin wrote:
> Fresh versions of both products (1.30.0 and php82-8.2.7) are already available
> in FreeBSD ports tree and as packages as well.
> I'd recommend to upgrade both products and try to reproduce the issue.
>
> Thank you.
>

Yes, I saw that. We upgraded Unit and PHP on one server; if we don't see any problems, we will upgrade them on the other servers shortly.

Best regards,

Peter

From p.tkatchenko at bimp.fr Mon Jun 26 08:27:47 2023
From: p.tkatchenko at bimp.fr (Peter TKATCHENKO)
Date: Mon, 26 Jun 2023 08:27:47 +0000
Subject: Strange Unit failure
In-Reply-To:
References: <20230620230030.074cca0d@kappa.digital-domain.net> <76878c8e-6f43-8f35-a3fd-32d58b87c429@bimp.fr>
Message-ID: <8abdaa17-09aa-9729-9927-5a9c7f3fb4da@bimp.fr>

On 23/06/2023 19:06, Sergey A. Osokin wrote:
>> I have
>>
>> proxy_pass http://unit_backend$request_uri;
>>
>> in NGINX config, that's all. Nothing in Unit config.
> Could you please provide a bit more details on this, i.e.
> the full `server {}' block.
>

The config is huge, with many includes, and some of them provide information that could be used to mount remote attacks. So I prefer to send you the config in a PM to avoid worldwide sharing ;)

Thanks,

Peter

From osa at freebsd.org.ru Tue Jun 27 14:03:01 2023
From: osa at freebsd.org.ru (Sergey A.
Osokin)
Date: Tue, 27 Jun 2023 17:03:01 +0300
Subject: Strange Unit failure
In-Reply-To: <8abdaa17-09aa-9729-9927-5a9c7f3fb4da@bimp.fr>
References: <20230620230030.074cca0d@kappa.digital-domain.net> <76878c8e-6f43-8f35-a3fd-32d58b87c429@bimp.fr> <8abdaa17-09aa-9729-9927-5a9c7f3fb4da@bimp.fr>
Message-ID:

Hi Peter,

On Mon, Jun 26, 2023 at 08:27:47AM +0000, Peter TKATCHENKO wrote:
> The config is huge, with many includes, some of them provides the
> information that could be used to organize remote attacks. So I prefer
> to send you the config in PM to avoid worldwide sharing ;)

I've received the configuration file; it would be better to remove all
sensitive information and share it here for future reference.

In any case, could you explain the proxy_pass configuration

proxy_pass http://unit_backend$request_uri;

Why is it necessary to send $request_uri to the NGINX Unit backend?

Thank you.

--
Sergey A. Osokin

From p.tkatchenko at bimp.fr Thu Jun 29 13:35:04 2023
From: p.tkatchenko at bimp.fr (Peter TKATCHENKO)
Date: Thu, 29 Jun 2023 13:35:04 +0000
Subject: Strange Unit failure
In-Reply-To:
References: <20230620230030.074cca0d@kappa.digital-domain.net> <76878c8e-6f43-8f35-a3fd-32d58b87c429@bimp.fr> <8abdaa17-09aa-9729-9927-5a9c7f3fb4da@bimp.fr>
Message-ID:

On 27/06/2023 16:03, Sergey A. Osokin wrote:
> could you explain the proxy_pass configuration
> proxy_pass http://unit_backend$request_uri;
>
> Why it's necessary to send $request_uri to the NGINX Unit backend?
>

Hmm, maybe I'm wrong, but it seems to be necessary to process 'strange' paths like

https://.../xxxx/index.php/yyyy/ssss

This application does not seem to use such paths, but I prefer to use such a 'standard' config for any PHP app. Please explain why I should not send $request_uri if I am wrong.

Thanks,

Peter

From osa at freebsd.org.ru Fri Jun 30 00:12:16 2023
From: osa at freebsd.org.ru (Sergey A.
Osokin)
Date: Fri, 30 Jun 2023 03:12:16 +0300
Subject: Strange Unit failure
In-Reply-To:
References: <20230620230030.074cca0d@kappa.digital-domain.net> <76878c8e-6f43-8f35-a3fd-32d58b87c429@bimp.fr> <8abdaa17-09aa-9729-9927-5a9c7f3fb4da@bimp.fr>
Message-ID:

On Thu, Jun 29, 2023 at 01:35:04PM +0000, Peter TKATCHENKO wrote:
> On 27/06/2023 16:03, Sergey A. Osokin wrote:
> > could you explain the proxy_pass configuration
> > proxy_pass http://unit_backend$request_uri;
> >
> > Why it's necessary to send $request_uri to the NGINX Unit backend?
>
> Please, explain me why I should not send $request_uri if I am wrong.

I don't think I know all of your cases, but I believe $request_uri is unnecessary in this case, and it's easy to test:

1. create two `server {}' blocks, one with the $request_uri and another one without;
2. set up an application backend to return $request_uri;
3. send a request to each server with the curl utility and get two responses;
4. compare the responses (I believe there's no difference there);
5. profit!

--
Sergey A. Osokin
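
[Editorial note: for future readers, the comparison Sergey describes can be sketched as the NGINX fragment below. This is an illustrative, untested sketch, not Peter's actual configuration; the upstream name `unit_backend`, the socket path, and the ports 8081/8082 are assumptions.]

```nginx
# Shared upstream pointing at Unit's listener (socket path is an assumption)
upstream unit_backend {
    server unix:/var/run/unit/listener.sock;
}

# Variant A: appends the raw client request URI explicitly
server {
    listen 8081;
    location / {
        proxy_pass http://unit_backend$request_uri;
    }
}

# Variant B: no URI part after the upstream name, so NGINX forwards
# the request URI in the same form as the client sent it
server {
    listen 8082;
    location / {
        proxy_pass http://unit_backend;
    }
}
```

With a backend that simply echoes $_SERVER['REQUEST_URI'], a request such as `curl http://host:8081/app/index.php/extra?x=1` and the same request against port 8082 should return identical values: when proxy_pass is specified without a URI part, NGINX already passes the client's request URI unchanged, which is why the explicit $request_uri is normally redundant.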