Pipes in modules

Marcus Clyne maccaday at gmail.com
Wed Mar 4 21:34:35 MSK 2009


Hi Manlio,
> You first need to allocate an Nginx connection, with:
>   c = ngx_get_connection(s, self->log);
>
This was the step I was looking for.  Thanks.
>
>   /* Store this instance in the connection data */
>   c->data = self;
>
> ...
> Your handler will be called when there is data in the pipe.
> Here you can retrieve the page data (stored somewhere; maybe you 
> write the memory address into the pipe, but this may be unsafe), and 
> then call finalize_request.
>
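
Just to check I've got the full sequence right, here's how I understand 
the setup from your description (pipe_fd and my_pipe_read_handler are 
placeholder names for the read end of the pipe and my event handler):

    ngx_connection_t  *c;

    /* wrap the read end of the pipe in an nginx connection */
    c = ngx_get_connection(pipe_fd, self->log);
    if (c == NULL) {
        return NGX_ERROR;
    }

    /* store this instance in the connection data */
    c->data = self;

    /* my_pipe_read_handler is called when the pipe becomes readable */
    c->read->handler = my_pipe_read_handler;
    c->read->log = self->log;

    /* register the read event with the event loop */
    if (ngx_add_event(c->read, NGX_READ_EVENT, 0) != NGX_OK) {
        ngx_free_connection(c);
        return NGX_ERROR;
    }
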
I had also thought about writing an address to the pipe.  I would have 
thought this would be safe, though I did wonder whether endianness 
could be an issue on some systems (my knowledge on that topic is 
limited) - converting to/from network byte order mightn't be a bad 
idea just to be safe.
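
Thinking about it more, though: both ends of the pipe live on the same 
host, so the byte order can't actually differ between writer and 
reader; the real safety question is whether the reader shares the 
writer's address space, since otherwise the pointer is meaningless.  
Something like this is what I have in mind (the fd names are 
placeholders, and generated_content is a hypothetical buffer):

    /* writer side (content generator); POSIX guarantees that writes
       of up to PIPE_BUF bytes to a pipe are atomic, and a pointer is
       well under that */
    void  *addr = generated_content;

    if (write(pipe_write_fd, &addr, sizeof(addr))
        != (ssize_t) sizeof(addr))
    {
        /* handle error */
    }

    /* reader side (pipe read handler); addr is only meaningful if
       this code runs in the same address space as the writer, e.g.
       a thread, or if it points into shared memory */
    void  *addr;

    if (read(pipe_read_fd, &addr, sizeof(addr))
        == (ssize_t) sizeof(addr))
    {
        /* addr now points at the generated content */
    }
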

One slightly different question: does it matter how big your generated 
content is in relation to the buffer size?  For example:

- handler forwards request to content generator
- content generator generates 1MB content and writes address of content 
to pipe
- pipe read handler retrieves address of content from pipe and 
finalizes the request (or passes it on to the next filter)

Does the length of the buffer used when passing to the next filter 
matter, i.e. are there any non-operating-system-derived upper limits 
like 4KB or 16KB?  (This is easily testable, which I'll do, but it 
leads on to my next question.)
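
To make the question concrete, here's roughly what I'm picturing in 
the pipe read handler, with the whole 1MB handed on as a single buffer 
(content and content_len stand in for whatever came through the pipe):

    ngx_buf_t    *b;
    ngx_chain_t   out;

    r->headers_out.status = NGX_HTTP_OK;
    r->headers_out.content_length_n = content_len;
    ngx_http_send_header(r);

    b = ngx_calloc_buf(r->pool);
    if (b == NULL) {
        return NGX_HTTP_INTERNAL_SERVER_ERROR;
    }

    b->pos = content;                  /* the generated content */
    b->last = content + content_len;
    b->memory = 1;                     /* read-only memory buffer */
    b->last_buf = 1;                   /* last buffer of the response */

    out.buf = b;
    out.next = NULL;

    return ngx_http_output_filter(r, &out);
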

Even if there aren't any Nginx-imposed limits, is it a good idea to 
manually split your content up into multiple buffers anyway?  If so, 
why?  I understand why it would be useful for very large content 
(e.g. content that is comparable to the amount of available memory, or 
even exceeds it), and that delivery speed could possibly improve for 
content of hundreds of KB or more.  Are there any other reasons?
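
In case it helps frame the question, I'd imagine manual splitting 
would look something like this (the 32KB chunk size is arbitrary):

    ngx_chain_t  *head = NULL, **link = &head;
    u_char       *p = content;
    size_t        left = content_len;

    while (left > 0) {
        size_t        n = (left < 32768) ? left : 32768;
        ngx_buf_t    *b = ngx_calloc_buf(r->pool);
        ngx_chain_t  *cl = ngx_alloc_chain_link(r->pool);

        if (b == NULL || cl == NULL) {
            return NGX_HTTP_INTERNAL_SERVER_ERROR;
        }

        b->pos = p;
        b->last = p + n;
        b->memory = 1;

        p += n;
        left -= n;
        b->last_buf = (left == 0);     /* mark the final chunk */

        cl->buf = b;
        cl->next = NULL;
        *link = cl;
        link = &cl->next;
    }

    return ngx_http_output_filter(r, head);
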


Thanks again for your help,

Marcus.




