aggregating multiple upstream responses before finalizing the downstream request

dchapiesky at ...
Fri May 16 18:11:57 MSD 2008

---------- Forwarded Message ----------
dchapiesky at ... <dchapiesky at ...> writes:

>> B) the headend nginx server (which has my module in it...) receives the request and passes the request on to three or more upstream servers... upstream1, upstream2, and upstream3

>Sorry if I don't interpret your idea correctly, but do you really mean you want to issue the same request to each of the upstream servers in parallel?

That is correct... the single request from downstream is to be sent to several upstream servers in parallel, so that there would be 2 or more pending upstream connections for the given downstream request.

>> D) during this time, the headend server is waiting for all three upstream responses

>You mean actively waiting, or waiting for an event on the upstream connection? If you have to actively wait for all of the upstream servers to reply it will be catastrophic, because you have already divided the upstream servers' processing power by 3, and if one server fails, extra latency is added by the error processing for *each request*!
The requests would be non-blocking. Since nginx already does not block on upstream requests, I simply want to add a facility for multiple upstream requests. I'm looking at either: daisy-chaining the requests (request->upstream->second upstream->third upstream->etc., where the handling of multiple upstreams is confined to the upstream module), or turning the request->upstream pointer in the http module into an array of upstreams (i.e. request->*upstream).

>> Question: do you think that multiple parallel upstream requests are possible as a module and not a branch of the nginx source?

>What would be interesting IMHO would be a heartbeat protocol checking upstream health, maybe as a thread inside nginx manipulating upstream status (it's not so easy to do because there can be multiple workers and critical-section issues). But sadly, no lightweight heartbeat protocol is defined and standardized.

The focus of my module is to -compare- results from multiple parallel servers, not to monitor upstream server health.
The closer-to-real-world example is:
- old server A with an old, crappy, hard-to-maintain (hmmm, Apache anyone?) database app
- new server B with a new, shiny, easy-to-configure (***nginx***) front end to a new shiny database app
- new server C with nginx (and my fault-tolerance module) acting as a proxy to servers A and B
If the process of migrating the database app from server A to server B breaks the results returned by the app, then tell me with a POST to my own server... because we have to know if the two apps are not returning the same data.
When the new server B is --verified-- to be equivalent to old server A, then we can retire server A --- but not before.

>> Question: Is there a mechanism by which I can inject a request into the nginx queue as if it came from a downstream client?

>Yes, I've done it, *but* for now it's Linux [recent kernel] only by using signalfd() and a special event module based on the epoll one with circular buffers, messaging and dedicated worker threads.

It seems to me that I could send the request to be injected to the port I'm listening on... duh. But then I'm incurring the overhead of going out to the TCP/IP stack and the OS and coming back. Surely there is a way of chaining a new request into the queue without resorting to that. I know this is a dumb question, but it is puzzling me.

>> Thanks for any comments on my endeavor.  I have studied Evan Miller's guide to module development and have followed the forum intently when internals have been discussed.  I am still learning nginx internals though...

>Yes, me too! My advice: take your time!

>> Question: Is there any documentation or, better yet, flow graphs describing how requests are handled?

>That's why we all ended up on this list ;-)

Thanks for the response so far!


More information about the nginx mailing list