Thanks, Sergey.

We are simulating 1000 clients. Some get cache hits and some go upstream, so there are more than 1000 connections in total.

We have 24 workers running, each configured with:

    events { worker_connections 1024; }

We are seeing the following errors from nginx:

    [warn] 21151#21151: 1024 worker_connections are not enough, reusing connections
    [crit] 21151#21151: accept4() failed (24: Too many open files)
    [alert] 21151#21151: *15716 socket() failed (24: Too many open files) while connecting to upstream,

I am assuming the second and third errors are due to the OS file-descriptor limit, but the first seems to come from a single worker process.

My assumption is that client requests are distributed across the 24 worker processes, so no individual worker should come anywhere close to the 1024-connection limit (1000 clients spread evenly over 24 workers would be roughly 42 client connections each, plus upstream connections).
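For the "Too many open files" errors, this is roughly the change we are planning to try. worker_rlimit_nofile and worker_connections are the directives described in the nginx core docs; the specific numbers below are only guesses for our load, not values we have validated:

    # main context: raise the open-file limit for each worker process
    worker_rlimit_nofile 16384;

    events {
        # allow more simultaneous connections per worker; this limit covers
        # all connections, so a proxied request can count twice
        # (one client-side connection plus one upstream connection)
        worker_connections 4096;
    }

We would also raise the OS limit (ulimit -n) for the user nginx runs as, so the two limits are consistent. Does that look like the right direction, or is the per-worker limit a red herring given the uneven distribution described below?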
But when I look at the process stats for the workers (via ps), I see an uneven distribution of CPU time. Note that this is from a different run than the logs above.

    UID      PID   PPID  C STIME TTY TIME     CMD
    netskrt  16905 16902 2 12:19 ?   00:07:05 nginx: worker process
    netskrt  16906 16902 1 12:19 ?   00:04:29 nginx: worker process
    netskrt  16908 16902 1 12:19 ?   00:03:30 nginx: worker process
    netskrt  16910 16902 0 12:19 ?   00:02:26 nginx: worker process
    netskrt  16911 16902 0 12:19 ?   00:01:32 nginx: worker process
    netskrt  16912 16902 0 12:19 ?   00:00:51 nginx: worker process
    netskrt  16913 16902 0 12:19 ?   00:00:11 nginx: worker process
    netskrt  16914 16902 0 12:19 ?   00:00:04 nginx: worker process
    netskrt  16915 16902 0 12:19 ?   00:00:25 nginx: worker process
    netskrt  16916 16902 0 12:19 ?   00:00:01 nginx: worker process
    netskrt  16917 16902 0 12:19 ?   00:00:00 nginx: worker process
    ...

Is there anything we can configure to more evenly distribute the connections? (The only directives I have found so far are sketched at the bottom of this message, below the quoted reply.)

Thanks…

Roger

> On Jun 3, 2022, at 8:40 PM, Sergey A. Osokin <osa@freebsd.org.ru> wrote:
>
> Hi Roger,
>
> hope you're doing well.
>
> On Fri, Jun 03, 2022 at 05:38:07PM -0700, Roger Fischer wrote:
>> Hello,
>>
>> my understanding is that worker_connections applies to each worker
>> (e.g. when set to 1024, 10 worker processes could handle up to 10240
>> connections).
>
> That's exactly right. Please read the following link [1] to get more
> details.
>
>> But we are seeing "1024 worker_connections are not enough, reusing
>> connections" from one worker while other workers are idle.
>
> So, it's possible to increase the number of those.
>
>> Is there something we can do to balance connections more evenly
>> across workers?
>
> Could you please add a bit more detail on this? Please note that
> there have been several improvements on that topic, so please follow
> the recommendations below.
>
>> nginx version: nginx/1.19.9
>
> The most recent stable version is 1.22.0 [2], so I'd recommend updating to
> that version.
>
> Thank you.
>
> References
> 1. https://nginx.org/en/docs/ngx_core_module.html#worker_connections
> 2. http://nginx.org/en/CHANGES-1.22
>
> --
> Sergey A. Osokin
> _______________________________________________
> nginx mailing list -- nginx@nginx.org
> To unsubscribe send an email to nginx-leave@nginx.org
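PS: Regarding my question above about distributing connections more evenly: the only directives I have found in the docs so far are accept_mutex (in the events context) and the reuseport parameter of the listen directive. Below is a rough, untested sketch of the reuseport variant as I understand it; the port number and the rest of the server block stand in for our real configuration:

    http {
        server {
            # "reuseport" creates a separate listening socket for each worker
            # and lets the kernel (SO_REUSEPORT) spread new connections across
            # them, instead of all workers accepting from a shared socket
            listen 8080 reuseport;   # 8080 is a placeholder for our real port

            # ... existing caching / proxy configuration goes here ...
        }
    }

Is that the right knob, or is something else more appropriate here?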