I understand that NGX_AGAIN is returned when a chain could not be sent
because no more data can be buffered on that socket.
I need to understand the following: in my case, when I receive a request, I
start a timer and send out some data every 10ms, creating a new timer
every 10ms until I decide to finish sending out data (video frames).
But if, in some callback triggered by the timer,
ngx_http_output_filter(..) returns NGX_AGAIN, *I assume* nginx will send
that chain as soon as the socket becomes writable again. But after that
happens, how can I restore my timer cycle?
Akos Gyimesi reported a request hang (downstream connections stuck in
the CLOSE_WAIT state forever) related to the use of proxy_cache_lock.
The issue is that when proxy_cache_lock_timeout is reached,
ngx_http_file_cache_lock_wait_handler() calls
r->connection->write->handler() directly, but
r->connection->write->handler is (usually) just
ngx_http_request_handler, which simply picks up r->connection->data,
which is *not* necessarily the current (sub)request, so the current
subrequest may never be continued or finalized, leading to an
infinite request hang.
The following patch fixes this issue for me. Comments welcome!
--- nginx-1.4.3/src/http/ngx_http_file_cache.c 2013-10-08
+++ nginx-1.4.3-patched/src/http/ngx_http_file_cache.c 2013-10-26
@@ -432,6 +432,7 @@ ngx_http_file_cache_lock_wait_handler(ng
+ ngx_connection_t *conn;
@@ -471,7 +472,10 @@ wakeup:
c->waiting = 0;
+ conn = r->connection;
I'm trying to understand how the shared memory pool works inside nginx.
To do that, I made a very small module which creates a shared memory zone
of 2097152 bytes,
then allocates and frees blocks of memory, starting from 0 and increasing
by 1kb until the allocation fails.
The strange parts to me were:
- the maximum block I could allocate was 128000 bytes
- each time the allocation failed, I started again from 0, but the maximum
allocated block changed with the following profile
Is this the expected behavior?
Can anyone help me by explaining how shared memory works?
I have another module which makes intensive use of shared memory, and
understanding this could help me improve it and solve some "no memory" messages.
I've attached the code.
Hello, all nginx developers.
We have a task: module development for nginx. Upon
receipt of the request, after permitting it, we forward the request to the backend
(e.g. another web server) using the modified module
(ngx_http_proxy_module). Upon receipt of the response from the server, we need to
modify the response (buffer chunks) and send it to the frontend (e.g. a browser). We
have tried to connect to the stream, but did
not get the desired result.
How can we solve this problem? How do we need to connect to the processing chain?
Looking forward to any help. Thanks in advance.
Best regards, Edelkin Dmitry, the developer
(Department of Applied Systems, ext. tel. 1624)
Russian Federation, Republic of Tatarstan, Kazan
city, group of companies «CENTER»
telephone number: +7 (843) 533 88 00
This is a response to rev 4746 which removed ETags. 4746 removes the ETag field
from the header in all instances where content is modified by the web server
prior to being sent to the requesting client. This is far more stringent than
required by the HTTP spec.
The HTTP spec requires that strict ETags be dependent on the variant that is
returned by the server. While removing all ETags from these variants
*technically* meets the spec, it is a bit extreme.
This commit modifies the ngx_http_clear_etag macro to check whether the ETag is
marked as a weak ETag. If it is, the ETag is retained rather than dropped.
Longer term, a better solution would be to completely remove
ngx_http_clear_strict_etag and replace it with functions to generate a strict
ETag for a variant prior to sending a response to the client, provided that
there is not a weak ETag field already included.
On 06/02/13 17:24, Primoz Bratanic wrote:
> Apache supports specifying multiple certificates (different types) for same
> host in line with OpenSSL support (RSA, DSA, ECC). This allows using ECC key
> exchange methods with clients that support it and it's backwards compatible.
> I wonder how much work would it be to add support for this to nginx. Is it
> just allowing specifying 2-3 certificates (and checking they have different
> key type) + adding support for returning proper key chain or are the any
> other obvious roadblocks (that are not obvious to me).
Here's a first stab at a patch. I hope this is a useful starting point
for getting this feature added to Nginx.
To specify an RSA cert plus an ECC cert, use...
ssl_certificate my_rsa.crt my_ecc.crt;
ssl_certificate_key my_rsa.key my_ecc.key;
Also, configure ssl_ciphers to prefer at least 1 ECDSA cipher and permit
at least 1 RSA cipher.
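As an illustration of that cipher setup (the cipher names below are examples only; pick your own policy), a configuration along these lines would prefer ECDSA while still permitting RSA, using the two-certificate syntax this patch proposes:

```nginx
ssl_certificate      my_rsa.crt my_ecc.crt;
ssl_certificate_key  my_rsa.key my_ecc.key;

# Prefer an ECDSA suite, but keep an RSA suite for clients
# without ECC support (example cipher names only):
ssl_prefer_server_ciphers on;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
```

With this ordering, OpenSSL selects the ECC certificate for clients offering an ECDSA suite and falls back to the RSA certificate otherwise.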
I think DSA certs should work too, but I've not tested this.
Issues I'm aware of with this patch:
- It doesn't check that each of the certs has a different key type
(but perhaps it should). If you specify multiple certs with the same
algorithm, all but the last one will be ignored.
- The certs and keys need to be specified in the correct order. If
you specify "my_rsa.crt my_ecc.crt" and "my_ecc.key my_rsa.key", Nginx
will start but it won't be able to complete any SSL handshakes. This
could be improved.
- It doesn't add the new feature to mail_ssl_module. Perhaps it should.
- The changes I made to ngx_conf_set_str_array_slot() work for me,
but do they break anything?
- An RSA cert and an ECC cert might well be issued by different CAs.
On Apache httpd, you have to use SSLCACertificatePath to persuade
OpenSSL to send different Intermediate certs for each one.
Nginx doesn't currently have an equivalent directive, and Maxim has
previously said it's unlikely to be added.
I haven't researched this properly yet, but I think it might be possible
to do "certificate path" in memory (i.e. without syscalls and disk
access on each certificate check) using the OpenSSL X509_LOOKUP API.
- I expect Maxim will have other comments. :-)
Senior Research & Development Scientist
COMODO - Creating Trust Online