[no subject]

François Battail fb at francois.battail.name
Sun Apr 13 17:13:51 MSD 2008


Manlio Perillo <manlio_perillo at ...> writes:

 
> What type of connection do you want to create?

I don't want to create a connection, just to split the processing of an HTTP
request in two without blocking Nginx.

HTTP request
-> handler
   send a message to one of my workers
   returns NGX_AGAIN

-> worker (running on an independent thread)
   wait for a message
   do the job (using blocking calls)
   "wake up" the HTTP handler with a Nginx event

-> handler
   process the worker's results
   finalize the HTTP reply

Hope it's clearer written that way.
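The three steps above can be sketched in plain POSIX, leaving the nginx
specifics out. This is only a model of the pattern, not nginx code: a pipe
stands in for the nginx event the worker uses to wake the handler, and names
like job_t and worker_main are hypothetical.

```c
/* Sketch of the handler/worker split: the handler hands work to a
   thread, "returns NGX_AGAIN", and is woken when the thread is done.
   A pipe models posting an nginx event. */
#include <pthread.h>
#include <unistd.h>
#include <stdio.h>

typedef struct {
    int  wakeup_fd;       /* write end of the pipe: "post an event" */
    char result[64];      /* filled in by the blocking work */
} job_t;

static void *worker_main(void *arg)
{
    job_t *job = arg;

    /* do the job: blocking calls are fine here, the event loop
       of the nginx worker process is not involved */
    snprintf(job->result, sizeof(job->result), "done");

    /* "wake up" the HTTP handler: in nginx this would be posting
       an event; here we write one byte the event loop can poll */
    char b = 1;
    (void) write(job->wakeup_fd, &b, 1);
    return NULL;
}

int run_job(job_t *job)
{
    int fds[2];
    if (pipe(fds) != 0)
        return -1;
    job->wakeup_fd = fds[1];

    pthread_t tid;
    if (pthread_create(&tid, NULL, worker_main, job) != 0)
        return -1;

    /* the handler has returned NGX_AGAIN; the event loop would now
       poll fds[0].  A blocking read stands in for that here. */
    char b;
    (void) read(fds[0], &b, 1);

    pthread_join(tid, NULL);
    close(fds[0]);
    close(fds[1]);
    return 0;
}
```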
   
> >> Is my_http_worker in a separate thread?
> >> Then this will not work, Nginx is not thread safe.
> > 
> > Yes there are separate threads launched for each Nginx worker process, 
> > it's not an issue as a request is linked to a worker process, just a 
> > matter of mutex; nothing to scare me! 
> 
> The problem is with:
> ngx_post_event
> 
> as far as I know it is not thread safe, unless you enable threads 
> support in Nginx (but in current version it is broken).

Yes, you're correct. But I don't want to activate thread support in Nginx;
I would like to create my own threads *inside* an Nginx worker. As far as
I've seen, I just need to protect Nginx's event queue from my threads with
a mutex.
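What I have in mind looks roughly like the following, a minimal sketch of a
mutex-protected posted-event queue. The fixed-size queue, event_t, post_event
and drain_events are stand-ins I made up for this illustration, not nginx's
actual event queue or ngx_post_event:

```c
/* Worker threads call post_event() instead of touching the (non
   thread-safe) event queue directly; the nginx worker process's
   event loop drains it under the same lock. */
#include <pthread.h>

#define MAX_EVENTS 64

typedef struct { int id; } event_t;

static event_t         queue[MAX_EVENTS];
static int             queue_len = 0;
static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

/* called from my worker threads */
int post_event(event_t ev)
{
    pthread_mutex_lock(&queue_lock);
    if (queue_len == MAX_EVENTS) {
        pthread_mutex_unlock(&queue_lock);
        return -1;                     /* queue full */
    }
    queue[queue_len++] = ev;
    pthread_mutex_unlock(&queue_lock);
    return 0;
}

/* called from the event loop; copies out pending events */
int drain_events(event_t *out, int max)
{
    pthread_mutex_lock(&queue_lock);
    int n = queue_len < max ? queue_len : max;
    for (int i = 0; i < n; i++)
        out[i] = queue[i];
    queue_len = 0;
    pthread_mutex_unlock(&queue_lock);
    return n;
}
```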

> > The mix using non preemptive model and worker thread is
> > very interesting but not so easy to do.
> > 
> 
> What are you trying to do?

Simply to call blocking functions during an HTTP request without blocking 
Nginx and without using an upstream server. In my project the database (Sqlite3)
is embedded into the web server and shares common memory pools, 
so I can process XML documents stored in the database without 
allocating or copying data, only building a list of fragment buffers 
for the reply, which fits nicely with ngx_chain_t. I hope it will be very 
fast to process a dynamic request.
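The zero-copy reply could look something like this sketch. The chain_t type
and the helper functions are simplified stand-ins I invented to mirror how
ngx_buf_t/ngx_chain_t reference existing memory via pos/last pointers; they
are not the nginx types themselves:

```c
/* Fragments of a document living in shared memory are referenced,
   not copied, by a chain of buffers.  Each link points into memory
   owned elsewhere (the database's pools, in my case). */
#include <stddef.h>
#include <stdlib.h>

typedef struct chain_s {
    const char     *pos;   /* start of fragment (not owned) */
    const char     *last;  /* one past the end of the fragment */
    struct chain_s *next;
} chain_t;

/* append a fragment by reference: no copy of the data itself */
chain_t *chain_append(chain_t **head, const char *data, size_t len)
{
    chain_t *cl = malloc(sizeof(*cl));
    if (cl == NULL)
        return NULL;
    cl->pos  = data;
    cl->last = data + len;
    cl->next = NULL;

    chain_t **p = head;
    while (*p)
        p = &(*p)->next;
    *p = cl;
    return cl;
}

/* total reply length, as needed for Content-Length */
size_t chain_total_len(const chain_t *cl)
{
    size_t n = 0;
    for (; cl != NULL; cl = cl->next)
        n += (size_t) (cl->last - cl->pos);
    return n;
}
```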

Your code is a great help for me, thank you.

Best regards


More information about the nginx mailing list