sending data in "chunks"

Manlio Perillo manlio_perillo at libero.it
Tue Sep 18 23:16:45 MSD 2007


Igor Sysoev wrote:
> On Sun, Sep 16, 2007 at 10:51:10PM +0200, Manlio Perillo wrote:
> 
>> Igor Sysoev wrote:
>>> [...]
>>> It's NGX_AGAIN.
>>> If you have got all your data ready you may send them at once in one chain.
>>> But if you are getting them gradually, then after NGX_AGAIN you should
>>> set event handlers and timer and return control to nginx.
>>>
>> As suggested, I have accumulated all the data in a buffer chain.
>> The data in my test is about 3.7 MB (an mp3) and, finally, the whole 
>> content is sent to the client.
>>
>> However ngx_http_output_filter still returns NGX_AGAIN, and I have
>> noticed that Firefox does not see the end of the stream when loading
>> the data.
> 
> Have you set last_buf in the last buf?
> 

Yes.

It seems that the cause was returning NGX_AGAIN instead of NGX_OK.
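For clarity, here is roughly what my handler does now (the function name is invented for this example; only ngx_http_output_filter and the ngx_buf_t flags are the real API):

```c
/* Hypothetical helper: send an accumulated chain, making sure the last
 * buffer is flagged as the end of the response.  `out` is the head of
 * the accumulated ngx_chain_t, `last` its last link. */
static ngx_int_t
send_accumulated_chain(ngx_http_request_t *r, ngx_chain_t *out,
    ngx_chain_t *last)
{
    last->buf->last_buf = 1;       /* end of the response for the client */
    last->buf->last_in_chain = 1;  /* end of this particular chain */

    /* rc may be NGX_OK (all sent), NGX_AGAIN (part of the data is
     * buffered because the socket was busy) or NGX_ERROR; it should be
     * propagated to the core as-is, not turned into NGX_OK. */
    return ngx_http_output_filter(r, out);
}
```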

So, please let me know if I'm right:

- when a call to ngx_http_output_filter with a buffer chain of *only
   one* buffer returns NGX_OK, it means that the *entire* buffer has
   been sent to the client.

   If it returns NGX_AGAIN, then I have to set
   request->write_event_handler so that my handler is executed when the
   socket is ready to send data again.

   Do I also have to set up a timer, so that I can time out the
   connection when I can't write for a certain amount of time?

- when I pass a "full" buffer chain to ngx_http_output_filter,
   all the low level stuff (sending piece of data when the socket is
   ready) is done by nginx.

   This means that I can treat NGX_AGAIN as NGX_OK, however when
   ngx_http_output_filter returns I have no guarantee the all the buffers
   has been sent to the client.

   This also means that the ngx_chain_t variable must be allocate
   dynamically?

   What happens in case of errors?
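If I understand the above correctly, a content handler can simply return the filter's code to the core and let ngx_http_finalize_request() do the rest (it installs the core write handler and arms the send_timeout timer). A sketch, with a placeholder module name:

```c
/* Hypothetical content handler skeleton.  As far as I can tell, the
 * ngx_buf_t structures and their data must be allocated from r->pool
 * (not on the stack), because nginx may still reference them after the
 * handler returns if the send could not complete in one pass. */
static ngx_int_t
ngx_http_mymodule_handler(ngx_http_request_t *r)
{
    ngx_int_t    rc;
    ngx_chain_t *out;

    /* ... build the response chain `out` from r->pool and set
     * last_buf on its last buffer ... */

    rc = ngx_http_send_header(r);
    if (rc == NGX_ERROR || rc > NGX_OK || r->header_only) {
        return rc;
    }

    /* NGX_OK: everything was written.  NGX_AGAIN: nginx keeps the
     * unsent part in r->out and retries when the socket becomes
     * writable; the returned code is passed by the core to
     * ngx_http_finalize_request(), which handles both cases. */
    return ngx_http_output_filter(r, out);
}
```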


P.S.
The source code of what I'm doing can be found here:
http://hg.mperillo.ath.cx/nginx/mod_pg/


It's a module for a direct interface with a PostgreSQL database.
The current version implements limited support for large objects.


The purpose of this module, besides offering access to PostgreSQL large 
objects for people who like to keep files in the database, is to make 
it possible to run queries without an intermediate web application.

The query result will be returned encoded in JSON format, so that AJAX 
applications can use it.



Thanks and regards   Manlio Perillo




