Making subrequests from a handler
Peter Leonov
pl at inshaker.ru
Sat Nov 28 03:49:10 MSK 2009
On 27.11.2009, at 5:26, agentzh wrote:
> On Fri, Nov 27, 2009 at 10:01 AM, Peter A Leonov <gojpeg at gmail.com> wrote:
>> I'm just wondering if it is possible to perform any subrequests this way.
>
> I think it makes little sense to say "subrequests can only be issued
> from output filters rather than content handlers". Why? Because output
> filters are just plain C functions called by content handlers
> themselves, usually in this form:
>
> ngx_int_t
> my_content_handler(ngx_http_request_t *r) {
>     ...
>     rc = ngx_http_send_header(r);
>     ...
>     return ngx_http_output_filter(r, cl);
> }
>
> The output filter chain is a plain C function call stack. So if an
> output filter can issue subrequests, then logically so can a content
> handler :) There's no real difference at the C level and no magic
> here.
Oh, now I see. It was so simple to check :)
Thanks a lot!
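Just to check my understanding, here is a rough sketch of what such a
content handler might look like. This is only my guess at the pattern,
not code from any real module; the /one and /two locations are
placeholders, and I'm not certain that returning NGX_DONE at the end is
right in every case.

    static ngx_int_t
    my_content_handler(ngx_http_request_t *r)
    {
        ngx_int_t            rc;
        ngx_str_t            uri_one = ngx_string("/one");  /* placeholder URIs */
        ngx_str_t            uri_two = ngx_string("/two");
        ngx_http_request_t  *sr;

        r->headers_out.status = NGX_HTTP_OK;

        rc = ngx_http_send_header(r);
        if (rc == NGX_ERROR || rc > NGX_OK || r->header_only) {
            return rc;
        }

        /* issue two subrequests; as discussed below, the postpone filter
           emits their output in the order they were issued, even though
           they run in parallel */
        if (ngx_http_subrequest(r, &uri_one, NULL, &sr, NULL, 0) != NGX_OK) {
            return NGX_ERROR;
        }

        if (ngx_http_subrequest(r, &uri_two, NULL, &sr, NULL, 0) != NGX_OK) {
            return NGX_ERROR;
        }

        return NGX_DONE;
    }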
>
>> The code of the SSI module tells me that a filter can do so with
>> ease. But what about handlers? This is all about the latest nginx 0.8.*.
>>
>
> The "echo" module and the "fancyindex" module both issue subrequests
> from content handler. There's also a workable case in this thread
> (although shaun eventually gave up the subrequest model due to an
> issue of canceling pending subrequets, well, you don't need
> cancellling subrequests in your scenario anyway):
>
> http://www.ruby-forum.com/topic/199392
Yes, that was a good thread. But I'd read it without this task in mind, so I missed a lot. I'll reread it twice %)
>
>> The thing I'm trying to implement is a simple JavaScript method
>> subrequest(uri, callback). Any number of subrequests may be awaited at
>> the same time and, when finished, combined in arbitrary order.
>
> Given the "postponed chain model" currently implemented in Nginx, if
> one does not buffer and rearrange the outputs of the subrequests using
> a custom output filter himself, then the order of the subrequest
> outputs will be exactly the *same* as the order they are originally
> issued even though they're actually run in parallel.
As far as I remember, NGX_HTTP_SUBREQUEST_IN_MEMORY helps me get the whole response body in a single memory buffer. Then that buffer was converted to a JS string and passed to the handler. After the callback was invoked, JS sent the string to the client (some overhead from copying, I see). This approach was not tested well, so I don't know how stable it is :) That was all back in the 0.6.* days.
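From memory, the shape of that code was roughly the following. This is
just a sketch with made-up names; the /backend location is assumed to be
served by an upstream module (proxy, memc and the like), since
NGX_HTTP_SUBREQUEST_IN_MEMORY only buffers the body in that case.

    static ngx_int_t
    my_subrequest_done(ngx_http_request_t *sr, void *data, ngx_int_t rc)
    {
        ngx_buf_t  *b;

        if (rc != NGX_OK || sr->upstream == NULL) {
            return rc;
        }

        /* with NGX_HTTP_SUBREQUEST_IN_MEMORY the whole response body sits
           in the upstream buffer, between b->pos and b->last */
        b = &sr->upstream->buffer;

        /* this is where the buffer would be turned into a JS string and
           handed to the JavaScript callback */
        ngx_log_error(NGX_LOG_DEBUG, sr->connection->log, 0,
                      "subrequest returned %uz bytes",
                      (size_t) (b->last - b->pos));

        return rc;
    }

    static ngx_int_t
    my_in_memory_handler(ngx_http_request_t *r)
    {
        ngx_str_t                    uri = ngx_string("/backend");  /* made up */
        ngx_http_request_t          *sr;
        ngx_http_post_subrequest_t  *ps;

        ps = ngx_palloc(r->pool, sizeof(ngx_http_post_subrequest_t));
        if (ps == NULL) {
            return NGX_ERROR;
        }

        ps->handler = my_subrequest_done;
        ps->data = NULL;

        if (ngx_http_subrequest(r, &uri, NULL, &sr, ps,
                                NGX_HTTP_SUBREQUEST_IN_MEMORY)
            != NGX_OK)
        {
            return NGX_ERROR;
        }

        return NGX_DONE;
    }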
>
> I must say the basic idea of subrequests driven by a scripting
> language like JS is definitely great. Why not script all the Nginx
> internal infrastructure directly for our apps? It's really a pity to
> just use Nginx as a boring proxy to bloated FastCGI apps or even to
> an Apache monster ;)
Totally agreed. The experimental Perl module is not enough these days.
>
> We're also going to turn Coco Lua into a first-class citizen in Nginx
> module development, just like C. Transparent non-blocking I/O is also
> a plus here compared to plain C coding. It would be interesting to do
> the same with other interpreters once C-level coroutine support is in
> place.
Lua is cool and JITed. It will be welcomed, no doubt. Especially if it can be embedded in the config somewhat like this: location /hello { lua { print "hello" } } ;)
>
>> Could you please help me figure it out or just point to a source code?
>>
>
> Please see the source code of the ngx_http_subrequest function in the
> core and the standard postpone filter for more details. The "echo"
> module may help you experiment with various aspects of both parallel
> and sequential subrequests from a content handler.
The ngx_http_subrequest function was the first place I looked, but my JavaScripted brain wasn't strong enough to deal with all those dependencies on the postpone filter. I'll read it again and again until nirvana comes to me ;)
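For anyone else digging through the archives, the declarations I keep
coming back to look roughly like this (quoted from memory, so better
double-check them against the real headers):

    typedef ngx_int_t (*ngx_http_post_subrequest_pt)(ngx_http_request_t *r,
        void *data, ngx_int_t rc);

    typedef struct {
        ngx_http_post_subrequest_pt   handler;
        void                         *data;
    } ngx_http_post_subrequest_t;

    ngx_int_t ngx_http_subrequest(ngx_http_request_t *r,
        ngx_str_t *uri, ngx_str_t *args, ngx_http_request_t **psr,
        ngx_http_post_subrequest_t *ps, ngx_uint_t flags);

Here *psr receives the newly created subrequest, and ps (which may be
NULL) names the callback to run when the subrequest finishes.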
>
> Good luck!
> -agentzh
Many thanks to you, agentzh!
Peter.
P.S. The ngx_http_memc_module is exactly what I was looking for!
Take ngx_http_memc_module, add ngx_http_(perl | lua | js | brainfck)_module, then memcachedb, and you'll get the amazing new… em… something very amazing and very new ;)