On Tue 17. 8. 2021 at 15:49, Maxim Konovalov <maxim@nginx.com> wrote:
> Hi Rob,
>
> On 17.08.2021 06:35, Robert Mueller wrote:
> > # HG changeset patch
> > # User Rob Mueller <robm@fastmail.fm>
> > # Date 1629171218 14400
> > #      Mon Aug 16 23:33:38 2021 -0400
> > # Node ID 89ff95b268e9817b344447b7e6785354229a4bab
> > # Parent  dda421871bc213dd2eb3da0015d6228839323583
> > Mail: add the "reuseport" option of the "listen" directive
> >
> > The "reuseport" option was added to the "listen" directive of the http
> > and stream modules in 1.9.1, but it wasn't added to the mail module.
> > This adds the option to the mail module to make it consistent with the
> > http and stream modules.
> >
> > In newer Linux kernel versions (somewhere between 4.9 and 5.10) this
> > option seems much more important. On production Debian servers, not
> > having or using this option caused processes to become very
> > unbalanced. With 8 worker processes, we would see one worker process
> > accepting 70%+ of all connections, a second process with about 10% or
> > so, and the remaining 20% of connections spread over the other 6
> > processes. This started causing problems, as the worker process
> > accepting the majority of connections would end up 100% CPU bound well
> > before the server's overall capacity was reached.
> >
> > Adding and enabling this option fixed this entirely, and now all
> > worker processes see an even spread of connections.
> >
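
For readers skimming the thread, a minimal sketch of the usage the patch
enables, assuming the mail syntax mirrors the existing http/stream
"reuseport" parameter (the host name, port, and auth endpoint below are
illustrative only):

    mail {
        server_name  mail.example.com;           # illustrative host name
        auth_http    http://127.0.0.1:9000/auth; # illustrative auth endpoint

        server {
            listen   143 reuseport;  # per-worker listening socket
            protocol imap;
        }
    }
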
> First, thanks for the patch.
>
> While reuseport could cure (or hide, if you will) the unbalancing you
> see, it makes sense to get a better understanding of what exactly is
> going on. So far we haven't seen such weird behaviour ourselves, nor
> received reports about such an uneven distribution of connections among
> nginx workers.

Hello!

It looks exactly like the known Linux epoll behaviour, which is nicely
explained here:
https://blog.cloudflare.com/the-sad-state-of-linux-socket-balancing/

In short, with a single shared listening socket, epoll wake-ups tend to
go to the most recently idle worker, so a few busy workers keep winning
new connections; per-worker listening sockets (reuseport) let the kernel
balance accepts instead.

Best, Jan Prachař

> Any chance you have accept_mutex and/or multi_accept enabled? Any other
> ideas?
>
> --
> Maxim Konovalov
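
For anyone checking Maxim's question against their own configuration,
both directives live in the events block; a sketch with illustrative
values (both default to off in current nginx, accept_mutex since
1.11.3):

    events {
        worker_connections 1024;
        accept_mutex       off;  # default since nginx 1.11.3
        multi_accept       off;  # default; accept one connection per wake-up
    }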