I understand that NGX_AGAIN is returned when a chain could not be sent
because no more data can be buffered on that socket.
I need to understand the following: in my case, when I receive a request, I
start a timer and send out some data, then I create a new timer
every 10ms until I decide to finish sending out data (video frames).
But if, in some callback triggered by the timer,
ngx_http_output_filter(..) returns NGX_AGAIN, *I assume* nginx will send
that chain as soon as the socket becomes writable again. But after that
happens, how can I restore my timer cycle?
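One way to handle this (a sketch only and untested; the my_* names are hypothetical): when a timer callback gets NGX_AGAIN, stop re-arming the timer and instead install a write-event handler on the request; once that handler sees the buffered chain drain, it re-arms the 10ms timer:

```
/* Sketch, assuming an nginx HTTP module; my_frame_event is a
 * hypothetical ngx_event_t driving the 10ms frame timer. */

static void
my_write_event_handler(ngx_http_request_t *r)
{
    /* Ask the output filter chain to flush what is still buffered. */
    if (ngx_http_output_filter(r, NULL) == NGX_AGAIN) {
        return;  /* socket still not writable; wait for the next write event */
    }

    /* Pending chain has drained: restore the 10ms timer cycle. */
    r->write_event_handler = ngx_http_request_empty_handler;
    ngx_add_timer(&my_frame_event, 10);
}
```

In the timer callback, on NGX_AGAIN you would then set r->write_event_handler = my_write_event_handler instead of re-arming the timer.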
Akos Gyimesi reported a request hang (downstream connections stuck in
the CLOSE_WAIT state forever) regarding use of proxy_cache_lock.
The issue is that when proxy_cache_lock_timeout is reached,
ngx_http_file_cache_lock_wait_handler() calls
r->connection->write->handler() directly, but
r->connection->write->handler is (usually) just
ngx_http_request_handler, which simply picks up r->connection->data,
which is *not* necessarily the current (sub)request, so the current
subrequest may never be continued nor finalized, leading to an
infinite request hang.
The following patch fixes this issue for me. Comments welcome!
--- nginx-1.4.3/src/http/ngx_http_file_cache.c 2013-10-08
+++ nginx-1.4.3-patched/src/http/ngx_http_file_cache.c 2013-10-26
@@ -432,6 +432,7 @@ ngx_http_file_cache_lock_wait_handler(ng
+ ngx_connection_t *conn;
@@ -471,7 +472,10 @@ wakeup:
c->waiting = 0;
+ conn = r->connection;
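Reconstructing the intent from the hunks above (this is my reading of the fragments, not the complete diff): the connection is saved before invoking the write handler, because the handler may finalize and free the current (sub)request, and the posted requests are then presumably run on the saved connection so the correct (sub)request is resumed:

```
    conn = r->connection;

    /* may finalize the current (sub)request, so r must not be
     * dereferenced after this call */
    r->connection->write->handler(r->connection->write);

    /* continue whichever (sub)request is actually pending */
    ngx_http_run_posted_requests(conn);
```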
I'm trying to understand how the shared memory pool works inside nginx.
To do that, I made a very small module which creates a shared memory zone
of 2097152 bytes and then allocates and frees blocks of memory, starting
from 0 and increasing by 1 KB until the allocation fails.
The parts that seemed strange to me were:
- the maximum block I could allocate was 128000 bytes
- each time the allocation failed and I started again from 0, the maximum
allocatable block size changed, with the following profile
Is this the expected behavior?
Can anyone help me by explaining how shared memory works?
I have another module which makes intensive use of shared memory, and
understanding this would help me improve it and solve some "no memory"
messages.
The code is attached.
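As far as I understand the slab allocator, part of the zone is consumed by bookkeeping, allocations larger than half a page are served from whole contiguous pages, and fragmentation left by earlier alloc/free cycles changes which contiguous runs are still available, which would explain both the ceiling well below the zone size and the varying maximum. For reference, the usual access pattern against a zone's slab pool looks roughly like this (sketch; error handling trimmed):

```
    ngx_slab_pool_t  *shpool;
    u_char           *p;

    /* zone is the ngx_shm_zone_t created via ngx_shared_memory_add() */
    shpool = (ngx_slab_pool_t *) zone->shm.addr;

    ngx_shmtx_lock(&shpool->mutex);

    p = ngx_slab_alloc_locked(shpool, 1024);
    if (p == NULL) {
        ngx_shmtx_unlock(&shpool->mutex);
        return NGX_ERROR;
    }

    /* ... use p ... */

    ngx_slab_free_locked(shpool, p);
    ngx_shmtx_unlock(&shpool->mutex);
```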
Before I send my main request and process the response through the create_request and process_header (and filter) callbacks, I need to have a short handshake with the upstream servers. It consists of a send() and a recv() from the upstream module. How can I implement this?
Would the following sequence work?

States: H = do-handshake (initial state), R = do-request

1. In state H, send the handshake request first through create_request().
2. In process_header(), while state == H:
   2.1 call 'create_request' again with the state set to R, so the main request gets created;
   2.2 call 'ngx_http_upstream_send_request' manually to restart the request-response cycle.
3. Because of 2.2, we get a process_header() call in state R.
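A sketch of how that sequence could look in module code (my_ctx_t, my_module, and the my_build_* helpers are hypothetical; note also that ngx_http_upstream_send_request() is a static function in src/http/ngx_http_upstream.c, so calling it directly from a module would need a patched nginx or some other way to re-trigger sending):

```
typedef enum { STATE_H, STATE_R } my_state_e;   /* handshake, then request */

static ngx_int_t
my_create_request(ngx_http_request_t *r)
{
    my_ctx_t  *ctx = ngx_http_get_module_ctx(r, my_module);

    if (ctx->state == STATE_H) {
        return my_build_handshake(r);   /* step 1: handshake bufs */
    }

    return my_build_main_request(r);    /* step 2.1: real request bufs */
}

static ngx_int_t
my_process_header(ngx_http_request_t *r)
{
    my_ctx_t  *ctx = ngx_http_get_module_ctx(r, my_module);

    if (ctx->state == STATE_H) {
        /* handshake reply parsed: switch state and rebuild the request */
        ctx->state = STATE_R;

        if (my_create_request(r) != NGX_OK) {
            return NGX_ERROR;
        }

        r->upstream->request_sent = 0;  /* make the upstream send again */

        /* step 2.2: restart the request-response cycle */
        ngx_http_upstream_send_request(r, r->upstream);

        return NGX_AGAIN;
    }

    /* step 3: state == R, parse the real response headers here */
    return NGX_OK;
}
```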
I was playing around with TCP FastOpen (an experimental TCP extension supported for clients in Linux >=3.6 and for servers in Linux >=3.7.1) that reduces the initial TCP 3-way handshake latency cost.
I noticed a couple of other projects (HAProxy in particular) have support for it, and it may be useful to some people to reduce latency within their stack or to experiment with it for broader deployment. It's a pretty small and non-invasive change too, so following this is a patch that optionally enables it on a listener if you specify fastopen=<pending TFO limit>, e.g.
    listen 443 ssl fastopen=5;
I'm using the mail module of nginx to proxy and load-balance IMAP+POP3
connections to backend servers. I do authentication via HTTP authentication.
Some clients send IMAP ID commands to the server with information
about their software and version. I would like to log that and maybe use
it during authentication. It would be great if nginx supported
that command and, if it has been sent before LOGIN, also provided the
information to the HTTP authentication script.
IMAP ID can be found in RFC 2971: http://www.faqs.org/rfcs/rfc2971.html
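For reference, an ID exchange per RFC 2971 looks roughly like this (the field values here are made-up examples):

```
C: a023 ID ("name" "ExampleMailer" "version" "1.2")
S: * ID ("name" "ExampleServer" "version" "0.9")
S: a023 OK ID completed
```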
The information about the client could be used to route the user to a
specific backend server that has some client-specific IMAP bug fixes in
place, or to restrict logins of a specific user to only
one specific client if the user wants that for slightly higher security,
or for a "last activity of the user" list like Gmail provides, or just
for client statistics.
I'm not a C programmer, so sadly I cannot write a patch for this myself,
but maybe someone is able to add this small feature?