I understand that NGX_AGAIN is returned when a chain could not be sent
because no more data can be buffered on that socket.
I need to understand the following: in my case, when I receive a request, I
start a timer with a 10 ms interval and send out some data, then I create a
new timer every 10 ms until I decide to finish sending out data (video
frames). But if, in some callback triggered by the timer,
ngx_http_output_filter(..) returns NGX_AGAIN, *I assume* nginx will send
that chain as soon as the socket becomes writable again. But after that
happens, how can I restore my timer cycle?
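One common way to handle this is to stop re-arming the timer on NGX_AGAIN and let the event loop call you back when the socket is writable, then restart the timer from there. A minimal sketch follows; the per-request context (my_ctx_t), my_build_frame(), and my_module are hypothetical names, while ngx_http_output_filter, ngx_handle_write_event, and ngx_add_timer are the real nginx API:

```c
/* Sketch only: assumes a per-request context holding the 10 ms send timer. */

typedef struct {
    ngx_event_t  send_timer;                       /* fires every 10 ms */
} my_ctx_t;

static void my_resume_sending(ngx_http_request_t *r);

static void
my_send_frame(ngx_event_t *ev)                     /* timer handler */
{
    ngx_http_request_t  *r = ev->data;
    my_ctx_t            *ctx = ngx_http_get_module_ctx(r, my_module);
    ngx_chain_t         *out = my_build_frame(r);  /* hypothetical */

    if (ngx_http_output_filter(r, out) == NGX_AGAIN) {
        /* Socket buffer full: suspend the timer cycle and ask the
         * event loop to call us back once the socket is writable. */
        r->write_event_handler = my_resume_sending;
        if (ngx_handle_write_event(r->connection->write, 0) != NGX_OK) {
            ngx_http_finalize_request(r, NGX_ERROR);
        }
        return;
    }

    ngx_add_timer(&ctx->send_timer, 10);           /* next frame in 10 ms */
}

static void
my_resume_sending(ngx_http_request_t *r)
{
    my_ctx_t  *ctx = ngx_http_get_module_ctx(r, my_module);

    /* Flush whatever the output filter chain still has buffered. */
    if (ngx_http_output_filter(r, NULL) == NGX_AGAIN) {
        return;                                    /* still not writable */
    }

    ngx_add_timer(&ctx->send_timer, 10);           /* restore the 10 ms cycle */
}
```

The key point is that the timer cycle and the write-readiness notification are two separate wakeup sources: only one should be armed at a time, and the write handler hands control back to the timer once the backlog drains.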
Akos Gyimesi reported a request hang (downstream connections stuck in
the CLOSE_WAIT state forever) regarding use of proxy_cache_lock.
The issue is that when proxy_cache_lock_timeout is reached, the code calls
r->connection->write->handler() directly, but
r->connection->write->handler is (usually) just
ngx_http_request_handler, which simply picks up r->connection->data,
which is *not* necessarily the current (sub)request, so the current
subrequest may never be continued nor finalized, leading to an
infinite request hang.
The following patch fixes this issue for me. Comments welcome!
--- nginx-1.4.3/src/http/ngx_http_file_cache.c 2013-10-08
+++ nginx-1.4.3-patched/src/http/ngx_http_file_cache.c 2013-10-26
@@ -432,6 +432,7 @@ ngx_http_file_cache_lock_wait_handler(ng
+ ngx_connection_t *conn;
@@ -471,7 +472,10 @@ wakeup:
c->waiting = 0;
+ conn = r->connection;
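The patch above arrives truncated here, but the idea it describes can be illustrated as follows (this is a hedged sketch of the approach, not necessarily the exact patch text): save the connection, resume the request that actually owns the cache wait via its own write_event_handler, and then run any posted (sub)requests.

```c
/* Illustration of the fix idea described in the report above.
 * Instead of r->connection->write->handler(), which dispatches on
 * r->connection->data and may resume a *different* (sub)request: */

conn = r->connection;

r->write_event_handler(r);            /* continue this request/subrequest */
ngx_http_run_posted_requests(conn);   /* then process posted subrequests  */
```

Both r->write_event_handler and ngx_http_run_posted_requests() are the standard nginx mechanisms for resuming a specific (sub)request safely from an event handler.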
I'm trying to understand how the shared memory pool works inside nginx.
To do that, I made a very small module which creates a shared memory zone
of 2,097,152 bytes,
and then allocates and frees blocks of memory, starting from 0 and
increasing by 1 KB until the allocation fails.
The strange parts to me were:
- the maximum block I could allocate was 128,000 bytes
- each time the allocation failed, I started again from 0, but the maximum
allocatable block changed, with the following profile
Is this the expected behavior?
Can anyone help me understand how shared memory works?
I have another module that makes intensive use of shared memory, and
understanding this would help me improve it and solve some "no memory"
messages. The code is attached.
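The probe loop described above can be sketched like this (the function name is hypothetical; ngx_slab_alloc/ngx_slab_free and the ngx_slab_pool_t cast are the real slab API):

```c
/* Sketch: allocate ever-larger blocks from a shared zone's slab pool
 * until allocation fails, freeing each block after the attempt. */

static void
my_shm_probe(ngx_shm_zone_t *zone, ngx_log_t *log)
{
    ngx_slab_pool_t  *shpool = (ngx_slab_pool_t *) zone->shm.addr;
    size_t            size;
    void             *p;

    for (size = 1024; ; size += 1024) {           /* grow by 1 KB steps */
        p = ngx_slab_alloc(shpool, size);

        if (p == NULL) {
            ngx_log_error(NGX_LOG_INFO, log, 0,
                          "slab alloc failed at %uz bytes", size);
            return;
        }

        ngx_slab_free(shpool, p);
    }
}
```

One thing worth noting (and a likely part of the answer): the slab pool's own bookkeeping structures and per-page metadata live inside the zone, and allocations larger than half a page are served from whole contiguous pages, so the largest single allocation is well below the nominal zone size and depends on page-level fragmentation left by earlier rounds.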
To all nginx developers:
We have a task involving module development for nginx. Upon receipt of a
request, after permitting it, we forward the request to the backend
(e.g. another web server) using the modified module
(ngx_http_proxy_module). Upon receiving the response from the server, we
need to modify the response (buffered chunks) and send it to the frontend
(e.g. a browser). We have tried to connect to the stream via:
But we did not get the desired result.
How can we solve this problem? How do we need to connect to the processing
chain: as a …? We look forward to any help. Thanks in advance.
Best regards, Edelkin Dmitry, the developer
(Department of Applied Systems, ext. tel. 1624)
Russian Federation, Republic of Tatarstan, Kazan
city, group of companies «CENTER»
telephone number: +7 (843) 533 88 00
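The usual place to hook into and modify a proxied response body, as asked above, is a body filter registered on top of the filter chain. A minimal sketch follows; the my_* names are hypothetical, while ngx_http_top_body_filter and the filter-chaining pattern are the standard nginx mechanism:

```c
/* Sketch of an output body filter that sees every buffer of the
 * response (including proxied responses) before it reaches the client. */

static ngx_http_output_body_filter_pt  ngx_http_next_body_filter;

static ngx_int_t
my_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
{
    ngx_chain_t  *cl;

    for (cl = in; cl; cl = cl->next) {
        /* inspect or rewrite cl->buf here (chunk by chunk) */
    }

    /* Pass the (possibly modified) chain down the filter chain. */
    return ngx_http_next_body_filter(r, in);
}

static ngx_int_t
my_filter_init(ngx_conf_t *cf)     /* called at postconfiguration */
{
    ngx_http_next_body_filter = ngx_http_top_body_filter;
    ngx_http_top_body_filter = my_body_filter;

    return NGX_OK;
}
```

Because filters are chained at postconfiguration time, the module sees the upstream response buffers without modifying ngx_http_proxy_module itself, which is generally preferable to patching the proxy module.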
Does anyone have experience integrating ZeroMQ with nginx? I am looking
for some pointers, to see what concerns I should look out for.
I am trying to contribute this code to an open source project.
Looking at the documentation, it seems there is no way to specify a proxy
bind address for both IPv4 and IPv6.
You can specify one or the other, but never both. This is a particular
issue when a configuration is set up to allow for a failure in IPv6
transit.
Is it possible to get a proxy_bind_v6 directive?
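To illustrate the limitation: with the existing proxy_bind directive only one source address can be pinned per context (addresses and the upstream name below are placeholders):

```nginx
location / {
    # Only one source address can be set here; there is no way to
    # also declare an IPv6 source for the same proxied location.
    proxy_bind 192.0.2.10;
    proxy_pass http://backend.example.com;
}
```

A hypothetical proxy_bind_v6 directive, as proposed above, would let an IPv6 source address (e.g. 2001:db8::10) coexist with the IPv4 one, so the worker could pick the matching source address depending on the address family of the upstream it connects to.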