aggregating multiple upstream responses before finalizing the downstream request

François Battail fb at francois.battail.name
Fri May 16 16:41:14 MSD 2008


dchapiesky at ... <dchapiesky at ...> writes:

> I am working on an nginx module. The simplest description of what it has to do
> can be summed up as a "voting based fault tolerance algorithm".

> B) the headend nginx server (which has my module in it...) receives the
> request and passes the request on to three or more upstream servers...
> upstream1, upstream2, and upstream3

Sorry if I'm not interpreting your idea correctly, but do you really mean you
want to issue the same request to each of the upstream servers in parallel?

> D) during this time, the headend server is waiting for all three upstream
> responses

Do you mean actively waiting, or waiting for an event on the upstream
connection? If you have to busy-wait for all of the upstream servers to reply
it will be catastrophic: you have already divided the processing power of the
upstream servers by 3, and if one server fails, extra latency will be added to
*each request* due to error processing!
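For what it's worth, the event-driven version of step D) is mostly bookkeeping:
each upstream read-event handler records its response and the downstream reply
is finalized only when the last one arrives. A minimal sketch in plain C (names
and the fixed-size body buffer are my assumptions, not nginx types):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define NUPSTREAMS 3  /* assumption: three upstreams, as in the example */

/* Hypothetical per-request context: one slot per upstream reply. */
typedef struct {
    int  received;              /* how many upstreams have answered */
    char body[NUPSTREAMS][64];  /* the candidate responses          */
} vote_ctx_t;

/* Called from each upstream's read-event handler; returns 1 when all
 * votes are in and the downstream response can be finalized. */
static int on_upstream_done(vote_ctx_t *ctx, const char *body)
{
    strncpy(ctx->body[ctx->received], body, sizeof ctx->body[0] - 1);
    ctx->body[ctx->received][sizeof ctx->body[0] - 1] = '\0';
    return ++ctx->received == NUPSTREAMS;
}

/* Majority vote over the collected bodies; NULL if there is no majority. */
static const char *winner(const vote_ctx_t *ctx)
{
    for (int i = 0; i < NUPSTREAMS; i++) {
        int votes = 0;
        for (int j = 0; j < NUPSTREAMS; j++)
            if (strcmp(ctx->body[i], ctx->body[j]) == 0)
                votes++;
        if (votes * 2 > NUPSTREAMS)
            return ctx->body[i];
    }
    return NULL;
}
```

No waiting anywhere: the worker keeps serving other connections between
events, which is the whole point of doing this inside the event loop.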

> Question: do you think that multiple parallel upstream requests are possible
> as a module and not a branch of the nginx source?

What would be interesting, IMHO, would be a heartbeat protocol checking
upstream health, maybe as a thread inside nginx that manipulates upstream
status (not so easy to do, because there can be multiple workers and
critical-section issues). But sadly, no lightweight heartbeat protocol is
defined and standardized.

> Question: Is there a mechanism by which I can inject a request into the nginx
> queue as if it came from a downstream client?

Yes, I've done it, *but* for now it's Linux-only (recent kernel), using
signalfd() and a special event module based on the epoll one, with circular
buffers, messaging and dedicated worker threads.

> Thanks for any comments on my endeavor.  I have studied Evan Miller's guide to
> module development and have followed the forum intently when internals have been
> discussed.  I am still learning nginx internals though...

Yes, me too! My advice: take your time!

> Question: Is there any documentation or, better yet, flow graphs describing
> how requests are handled?

That's why we all ended up on this list ;-)

Best regards
