How to avoid blocking Nginx with a long request
mdounin at mdounin.ru
Wed Apr 10 13:40:55 UTC 2013
On Wed, Apr 10, 2013 at 01:26:15PM +0000, MAGNIEN, Thierry wrote:
> I'm writing an Nginx module that uses information stored in
> memory to redirect requests to other servers. Basically, when a
> GET request arrives, it makes some checks and decides to which
> Location the request shall be redirected. In order to have
> Nginx update the information it holds in memory, I send it a
> specific POST request to trigger the update.
> However, reloading the information takes quite a lot of time,
> and I have some questions related to this:
> - while the POST request is handled in my module, the worker
> that took the request is blocked until it has finished
> processing; but if GET requests come in, are they handled by
> other workers, or can some GET requests get blocked?
If a worker process is blocked, it can't accept new connections.
But connections already accepted, e.g. just before the POST
request in question, are bound to that worker, and any requests
on them will be blocked until you return to the event loop.
> - if I want the processing not to block, can I use an event
> timer, in order to release the worker quickly and have the
> processing take place "in background"? Or will it block a
> worker anyway?
There is no such thing as "in background". As nginx is event
driven, everything happens in event handlers, and timers are
events as well. That is, just running a task in a timer handler
will equally block the worker process.
What might help is splitting the task into smaller ones and
returning to the event loop in between, to give other requests
time to be served. This may be done, e.g., via a 1ms timer.
Alternatively, you may consider doing a reload with the standard
nginx configuration reload mechanism. This way all blocking
operations are done in the master process, and then new workers
with the updated configuration are spawned to handle requests.
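For reference, the standard reload is triggered from the command
line (the pid file path varies between installations):

```shell
# Ask the master process to reload its configuration: the master
# re-reads nginx.conf (blocking only itself), spawns fresh workers,
# and gracefully shuts down the old ones.
nginx -s reload

# Equivalent: send SIGHUP to the master process directly.
kill -HUP "$(cat /run/nginx.pid)"
```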
More information about the nginx-devel mailing list