aggregating multiple upstream responses before finalizing the downstream request
dchapiesky at juno.com
Fri May 16 19:20:25 MSD 2008
François Battail <fb at ...> writes:
> Well, that way I understand, but I think it's still a waste of resources. Why
> don't you put an X-Header on the application server, like
> X-My-Application-Version="x.y.z", and issue an HTTP HEAD request on a special
> URL each hour from a script outside Nginx to do the monitoring? Unless there are
> some points I don't understand, it makes no sense to me to monitor something
> like this for *each* request, since it's not request related.
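The monitoring approach François describes could be sketched as a small standalone script (the header name `X-My-Application-Version` comes from his example; the host, port, and path are placeholders):

```python
# Sketch of the suggested out-of-band monitor: issue an HTTP HEAD request
# and read back the version header set by the application server.
import http.client

def fetch_app_version(host, port, path="/", header="X-My-Application-Version"):
    """HEAD the given URL and return the application version header, or None."""
    conn = http.client.HTTPConnection(host, port, timeout=5)
    try:
        conn.request("HEAD", path)
        resp = conn.getresponse()
        return resp.getheader(header)
    finally:
        conn.close()
```

A cron job running something like this once an hour would cover version monitoring without touching the per-request path.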
Server monitoring is not the target of this module...
Every request *is* important...
The point of this nginx module/modification is to:
A) act as a proxy sitting in front of a legacy application server and a new
application server. The two application servers MUST return the same results
for a given GET request.
B) compare the responses from the legacy application server and the new
application server... If the responses differ, then the instance should
be logged for future review of the new application server's development.
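Outside of nginx, the compare-and-log idea in (A) and (B) could be sketched roughly like this (hosts and ports are placeholders, and `compare_upstreams` is a hypothetical name, not part of any real module):

```python
# Minimal sketch: send the same request to the legacy ("gold standard")
# server and the new server, return the legacy response as the answer,
# and log any difference for later review.
import http.client
import logging

def fetch(host, port, method, path, body=None):
    """Send one request to one upstream and return (status, body)."""
    conn = http.client.HTTPConnection(host, port, timeout=10)
    try:
        conn.request(method, path, body)
        resp = conn.getresponse()
        return resp.status, resp.read()
    finally:
        conn.close()

def compare_upstreams(legacy, new, method, path, body=None):
    """Return the legacy response; log a warning when the new server disagrees.

    legacy and new are (host, port) pairs."""
    gold = fetch(*legacy, method, path, body)
    suspect = fetch(*new, method, path, body)
    if gold != suspect:
        logging.warning("mismatch on %s %s: legacy status=%s, new status=%s",
                        method, path, gold[0], suspect[0])
    return gold  # the legacy answer is always considered correct
```

Doing this inside nginx itself, per request, is exactly the multiple-upstream question posed at the end of this mail.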
Imagine you have a crappy apache server sitting on top of a bunch of badly
written php/python scripts which are talking to very expensive and very slow
oracle databases which are costing you an enormous amount of money....
Now imagine that you have an nginx server with fastcgi to a well written
php/python set of scripts which in turn front a mysql database...
Your company DEPENDS ON the old crappy system and has hundreds of gigabytes of
data sitting in the oracle database...
Changing the look and feel of the web application is NOT AN OPTION because
employees are dumb and your call center will be flooded with calls about "how
do I do this..? I did it yesterday... but it doesn't work today..."
SO... you keep running the crappy server because it is the gold standard by
which you measure the accuracy of the new servers....
Accuracy is not just measured in appearance but the actual information being
provided. We are porting away from an oracle database with embedded sql
triggers, which are now more easily maintained as python scripts....
The information returned from the old server "is always considered correct"...
the information returned from the new server "is always considered suspect"...
Yet testing the new server in a sterile environment is not an option because
there is no way to test all aspects of the system without real world data...
So you write a proxy which sends all requests (GET/POST/HEAD/DELETE) to both
the old and new servers, thus testing the new application while simultaneously
keeping the old server up to date.
In the process, you get metrics on how much better the new system is than the
old, which makes sure you get a paycheck next week. You also get a clear log
of all differences between your work-in-progress new system and the old gold
standard.
> > It seems to me that I could send the request to be injected to the port
> > I'm listening on... duh. But I'm incurring the overhead of going out to
> > the tcp/ip stack and OS and then returning. Surely there is a way of
> > chaining a new request into the queue without having to resort to that.
> > I know this is a dumb question but it is puzzling me.
> It's not a dumb question; it's a complex question. The main problem is to get
> back the context when you have the event, without interfering with the
> processing of Nginx; it took me weeks to find and validate a clean approach.
> The idea of spoofing, injecting, or faking data in software like Nginx is
> very dangerous.
I agree that it can be dangerous on a server in the wild....
In this case, it allows a testing module to be an adjunct to the compare
module.
Still - the question remains... is it possible to have multiple upstream
requests for a given downstream request?