Http: make ngx_http_init_listening a public api

Maxim Dounin mdounin at mdounin.ru
Mon May 22 13:18:46 UTC 2017


Hello!

On Mon, May 22, 2017 at 01:17:40AM +0800, 胡聪 (hucc) wrote:

> Hi,
> 
> On Saturday, May 20, 2017 1:55 AM +0300, Maxim Dounin wrote:
> 
> >On Fri, May 19, 2017 at 03:04:43AM +0800, 0 at lvht.net wrote:
> >
> >> Thank you for your reply.
> >> ********
> >> Please let me assume it makes sense to make nginx work as a
> >> standalone websocket server. The problem we will face is that we
> >> cannot send messages to a client, because when the client initiates
> >> a TCP connection, it will be handled by an arbitrary worker. If we
> >> want to push a message to one client, we must first find the worker
> >> that is processing that client's connection.
> >> *********
> >> It is also impossible to add a new listening port to an http
> >> server directive dynamically in the websocket model, and it would
> >> be a huge burden to sync this logic to the http model. So I propose
> >> that nginx open this API.
> >
> >Thank you for the detailed explanation.
> >
> >So, you are trying to introduce per-process listening sockets in
> >order to be able to communicate with a particular worker process.
> >
> >This is highly unlikely to work properly without deep integration
> >with nginx itself, and/or will be broken by trivial unrelated
> >changes in nginx.  So the answer is still negative.
> >
> >If you want to try to address the problem properly, consider the
> >following approaches:
> >
> >- create listening sockets yourself, and handle connections in
> >  your module;
> >
> >- introduce per-process listening sockets in nginx core; likely
> >  there will be many obscure problems though, including things like
> >  what to do on graceful shutdown;
> >
> >- use interprocess communication instead - for example, nginx has
> >  shared memory available to modules (there are also channels to
> >  pass messages between processes, but these are not currently
> >  available to modules).
> 
> It is necessary to introduce per-process listening sockets in nginx core.
> Last year I did most of the work, after I found it is hard to establish
> a connection for pushing data to another process in
> ngx_rtmp_auto_push_module. The problem can be solved by introducing a
> flag, aloneport, in ngx_listening_t that works together with ls->worker.
> The configuration would be listen ** [aloneport=ngx_worker]. As far as I
> know, there is an obscure problem: what to do on reload when the new
> worker_processes number is less than the not-yet-updated ngx_worker of an
> aloneport socket. As for the problem of what to do on graceful shutdown,
> I still don't get it. Besides, I hope that arut will provide some ideas.
> Another problem/idea, found in ngx_rtmp_module: it is also necessary to
> introduce a config option like NGX_3RD_CORE_MODULES (or some other
> mechanism) to ensure that 3rd party CORE modules are placed after
> ngx_events_module. This would solve some obscure problems for which the
> current workaround in ngx_rtmp_module is unfriendly and incomprehensible.
> 
> If you guys also think this is needed, I will provide the aloneport patch
> once time permits.
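
For context, the quoted aloneport idea would roughly amount to the
sketch below.  This is a hypothetical illustration only - the
aloneport flag does not exist in nginx, and example_attach_listenings()
is a made-up name - it simply mirrors how the existing reuseport
handling pairs ls->worker with a particular worker in
ngx_event_process_init():

    #include <ngx_config.h>
    #include <ngx_core.h>
    #include <ngx_event.h>

    /*
     * Hypothetical sketch of the proposed "aloneport" flag; this
     * field is not part of ngx_listening_t.  Each worker would skip
     * listening sockets pinned to another worker, the same way
     * reuseport-cloned sockets are skipped today.
     */
    static void
    example_attach_listenings(ngx_cycle_t *cycle)
    {
        ngx_uint_t        i;
        ngx_listening_t  *ls;

        ls = cycle->listening.elts;

        for (i = 0; i < cycle->listening.nelts; i++) {

            if (ls[i].aloneport && ls[i].worker != ngx_worker) {
                /* socket belongs to another worker process */
                continue;
            }

            /* ... set up the read event and start accepting ... */
        }
    }

The reload concern raised above is visible here: after a reload that
lowers worker_processes, a socket pinned to a worker number that no
longer exists would never be attached by any process.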

I personally think that per-process listening sockets are not a 
good solution from an architectural point of view.  While they 
provide something immediate and easy to use, they 1) introduce 
(likely unsolvable) problems during shutdown and configuration 
reload, and 2) don't really solve anything as soon as the problem 
is a little bit more complicated than "send something to one 
client" - for example, it is not possible to send something to a 
group of clients.  Not to mention that they are very non-trivial 
to use if you don't know the ports in advance, and they also break 
the basic nginx model of multiple identical worker processes.

So, just for the record: I don't think this is needed, and would 
rather suggest improving interprocess communication instead.  I 
can be convinced that it is needed, though, as long as the benefits 
are clearly demonstrated and the downsides are understood and 
addressed somehow.
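
For completeness, the shared memory facility mentioned earlier is
already available to modules.  Below is a minimal sketch of how a
hypothetical module might register and initialize a zone; the names
(example_*, "example_zone") and the 64k size are made up for
illustration:

    #include <ngx_config.h>
    #include <ngx_core.h>

    typedef struct {
        ngx_atomic_t  counter;    /* state visible to all workers */
    } example_shctx_t;

    /* zone init callback, called once the shared segment exists */
    static ngx_int_t
    example_init_shm_zone(ngx_shm_zone_t *shm_zone, void *data)
    {
        ngx_slab_pool_t  *shpool;
        example_shctx_t  *ctx;

        if (data) {
            /* zone carried over across a configuration reload */
            shm_zone->data = data;
            return NGX_OK;
        }

        shpool = (ngx_slab_pool_t *) shm_zone->shm.addr;

        ctx = ngx_slab_calloc(shpool, sizeof(example_shctx_t));
        if (ctx == NULL) {
            return NGX_ERROR;
        }

        shm_zone->data = ctx;

        return NGX_OK;
    }

    /* called from the module's configuration handling */
    static ngx_shm_zone_t *
    example_add_shm_zone(ngx_conf_t *cf, void *tag)
    {
        ngx_str_t        name = ngx_string("example_zone");
        ngx_shm_zone_t  *shm_zone;

        shm_zone = ngx_shared_memory_add(cf, &name, 65536, tag);
        if (shm_zone == NULL) {
            return NULL;
        }

        shm_zone->init = example_init_shm_zone;

        return shm_zone;
    }

Any worker can then read or update ctx->counter, taking the slab
pool's mutex with ngx_shmtx_lock()/ngx_shmtx_unlock() for anything
that is not a single atomic operation.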

-- 
Maxim Dounin
http://nginx.org/

