Hi,
I understand that NGX_AGAIN is returned when a chain could not be sent
because no more data can be buffered on that socket.
I need to understand the following: in my case, when I receive a request, I
start a 10ms timer and send out some data, then I keep creating a new 10ms
timer until I decide to finish sending out data (video frames).
But if ngx_http_output_filter(..) returns NGX_AGAIN in one of the callbacks
triggered by the timer, *I assume* nginx will send that chain as soon as the
socket becomes writable again. But after that happens, how can I restore my
timer cycle?
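For reference, here is roughly what my timer callback looks like (a
simplified sketch, not my actual module code; my_next_frame_chain() is a
placeholder for my frame-building code, and ev->data is assumed to hold the
request):

#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>

/* placeholder: builds a chain with the next video frame */
static ngx_chain_t *my_next_frame_chain(ngx_http_request_t *r);

static void
my_frame_timer_handler(ngx_event_t *ev)
{
    ngx_int_t            rc;
    ngx_chain_t         *out;
    ngx_http_request_t  *r = ev->data;

    out = my_next_frame_chain(r);

    rc = ngx_http_output_filter(r, out);

    if (rc == NGX_AGAIN) {
        /* the chain (or part of it) stays buffered; this is where I don't
           know how to resume the cycle once the socket is writable again */
        return;
    }

    if (rc != NGX_OK) {
        ngx_http_finalize_request(r, rc);
        return;
    }

    ngx_add_timer(ev, 10);    /* re-arm the 10 ms timer */
}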
Thanks,
J.Z.
Hi,
I'm trying to understand how the shared memory pool works inside nginx.
To do that, I made a very small module which creates a shared memory zone
of 2097152 bytes and then allocates and frees blocks of memory, starting
from 0 and increasing by 1KB until the allocation fails.
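The test loop is essentially the following (a simplified sketch, not the
attached code itself; shpool is assumed to be the ngx_slab_pool_t of the
2097152-byte zone):

#include <ngx_config.h>
#include <ngx_core.h>

static void
my_slab_test(ngx_slab_pool_t *shpool, ngx_log_t *log)
{
    void    *p;
    size_t   size;

    /* 1 KB, 2 KB, 3 KB, ... until ngx_slab_alloc() fails */
    for (size = 1024; /* void */ ; size += 1024) {
        p = ngx_slab_alloc(shpool, size);

        if (p == NULL) {
            ngx_log_error(NGX_LOG_INFO, log, 0,
                          "allocation of %uz bytes failed", size);
            break;
        }

        ngx_slab_free(shpool, p);
    }
}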
The parts that seemed strange to me were:
- the maximum block I could allocate was 128000 bytes
- each time the allocation failed, I started again from 0, but the maximum
allocated block size changed with the following profile:
128000
87040
70656
62464
58368
54272
50176
46080
41984
37888
33792
29696
Is this the expected behavior?
Can anyone help me by explaining how shared memory works?
I have another module which makes intensive use of shared memory, and
understanding this would help me improve it and solve some "no memory"
messages.
I have attached the code.
Thanks,
Wandenberg
Hi all,
Here is another round of SO_REUSEPORT support. The plot has changed a
little bit to allow smooth configuration reloading and binary upgrading.
Here is what happens when so_reuseport is enabled (this does not affect
the single process model):
- The master creates the listen sockets w/ SO_REUSEPORT, but does not
configure them
- The first worker process will inherit the listen sockets created by the
master and configure them
- After the master has forked the first worker process, all listen sockets
in the master are closed
- The rest of the workers will create their own listen sockets w/ SO_REUSEPORT
- During binary upgrade, listen sockets are no longer passed through
environment variables, since the new master will create its own listen
sockets. Well, the old master actually does not have any listen
sockets open :).
The idea behind this plot is that at any given time there is always
one listen socket left, which can inherit the syncaches and pending
sockets from the to-be-closed listen sockets. The inheritance itself is
handled by the kernel; I implemented this inheritance for DragonFlyBSD
recently (http://gitweb.dragonflybsd.org/dragonfly.git/commit/02ad2f0b874fb0a45eb6975…).
I am not tracking Linux's code, but I think the Linux side will
eventually get (or has already gotten) the proper fix.
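For clarity, the SO_REUSEPORT part of the above boils down to each process
opening its own listen socket on the same address:port roughly like this
(an illustrative sketch only, not the patch itself; see the link below for
the actual changes):

#include <sys/socket.h>
#include <unistd.h>

static int
open_reuseport_listener(const struct sockaddr *sa, socklen_t salen)
{
    int  fd, on = 1;

    fd = socket(sa->sa_family, SOCK_STREAM, 0);
    if (fd == -1) {
        return -1;
    }

    /* several processes may bind/listen on the same address:port,
       as long as each of them sets SO_REUSEPORT before bind() */
    if (setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &on, sizeof(on)) == -1
        || bind(fd, sa, salen) == -1
        || listen(fd, 511) == -1)
    {
        close(fd);
        return -1;
    }

    return fd;
}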
The patch itself:
http://leaf.dragonflybsd.org/~sephe/ngx_soreuseport3.diff
Configuration reloading and binary upgrading will not be interfered with,
as they were w/ the first 2 patches.
Binary upgrade reverting method 1 ("Send the HUP signal to the old
master process. ...") will not be interfered with either, as it was w/ the
first 2 patches. There could still be some glitches (though not as bad as
w/ the first 2 patches) if binary upgrade reverting method 2 ("Send the
TERM signal to the new master process. ...") is used. I think we
probably just need to mention that in the documentation.
Best Regards,
sephe
--
Tomorrow Will Never Die
Hello!
Here are patches for OCSP stapling support. Testing and
review appreciated.
New directives:
ssl_trusted_certificate /path/to/file;
Specifies a file with CA certificates in the PEM format used for
certificate verification. In contrast to ssl_client_certificate, DNs
of these certificates aren't sent to a client in CertificateRequest.
ssl_stapling on|off;
Activates OCSP stapling.
ssl_stapling_file /path/to/file;
Use a predefined OCSP response for stapling instead of querying the responder.
Assumes an OCSP response in DER format, as produced by "openssl ocsp".
ssl_stapling_responder URL;
Use the specified OCSP responder instead of the one found in the AIA
certificate extension.
Example configuration:
server {
    listen 443 ssl;
    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;
    ssl_stapling on;
    ssl_trusted_certificate /path/to/ca.pem;
    resolver 8.8.8.8;
}
Known limitations:
- Unless an externally set OCSP response is used (via the "ssl_stapling_file"
directive), a stapled response won't be sent in the first connection. This
is due to the fact that OCSP responders are currently queried by nginx
only once it receives a connection with the certificate_status extension in
the ClientHello, and due to limitations in the OpenSSL API (the certificate
status callback is blocking); see the sketch after this list.
- Cached OCSP responses are currently stored in local process memory (thus
each worker process will query OCSP responders independently). This
shouldn't be a problem, as the typical number of worker processes is low,
usually set to match the number of CPUs.
- Various timeouts are hardcoded (connect/read/write timeouts are 60s, a
response is considered to be valid for 1h after loading). Adding
configuration directives to control these would be trivial, but it may
be a better idea to omit them for simplicity.
- Only "http://" OCSP responders are recognized.
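To illustrate the OpenSSL API limitation mentioned above: the certificate
status callback has to either return a stapled response immediately or
decline, so a pending OCSP query cannot be waited for there. A rough
sketch of the callback interface (this is not the patch code; the cache
structure here is made up for illustration):

#include <string.h>
#include <openssl/ssl.h>

/* hypothetical per-certificate cache of a DER-encoded OCSP response */
typedef struct {
    unsigned char  *der;
    long            len;
} ocsp_cache_t;

/* installed with SSL_CTX_set_tlsext_status_cb() and
   SSL_CTX_set_tlsext_status_arg() */
static int
status_callback(SSL *ssl, void *arg)
{
    unsigned char  *resp;
    ocsp_cache_t   *cache = arg;

    if (cache == NULL || cache->der == NULL) {
        /* nothing cached yet: the handshake continues without a stapled
           response; the callback cannot block waiting for an OCSP query */
        return SSL_TLSEXT_ERR_NOACK;
    }

    /* OpenSSL takes ownership of the buffer passed to it, so hand it a copy */
    resp = OPENSSL_malloc(cache->len);
    if (resp == NULL) {
        return SSL_TLSEXT_ERR_NOACK;
    }

    memcpy(resp, cache->der, cache->len);
    SSL_set_tlsext_status_ocsp_resp(ssl, resp, cache->len);

    return SSL_TLSEXT_ERR_OK;
}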
Patch can be found here:
http://nginx.org/patches/ocsp-stapling/
Thanks to Comodo, DigiCert and GlobalSign for sponsoring this work.
Maxim Dounin
details: http://hg.nginx.org/nginx/rev/aadfadd5af2b
branches:
changeset: 5289:aadfadd5af2b
user: Maxim Dounin <mdounin(a)mdounin.ru>
date: Fri Jun 14 20:56:07 2013 +0400
description:
Fixed ngx_http_test_reading() to finalize request properly.
Previous code called ngx_http_finalize_request() with rc = 0. This is
ok if a response status was already set, but resulted in "000" being
logged if it wasn't. In particular this happened with limit_req
if a connection was prematurely closed during limit_req delay.
diffstat:
src/http/ngx_http_request.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diffs (12 lines):
diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c
--- a/src/http/ngx_http_request.c
+++ b/src/http/ngx_http_request.c
@@ -2733,7 +2733,7 @@ closed:
     ngx_log_error(NGX_LOG_INFO, c->log, err,
                   "client prematurely closed connection");
 
-    ngx_http_finalize_request(r, 0);
+    ngx_http_finalize_request(r, NGX_HTTP_CLIENT_CLOSED_REQUEST);
 }