I understand that NGX_AGAIN is returned when a chain could not be sent
because no more data can be buffered on that socket.
I need to understand the following: in my case, when I receive a request, I
start a timer every 10ms and send out some data, then I create a new timer
every 10ms until I decide to finish sending out data (video frames).
But if ngx_http_output_filter(...) returns NGX_AGAIN in a callback
triggered by the timer, *I assume* nginx will send that chain as soon as
the socket becomes writable again. But after that happens, how can I
restore my timer cycle?
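For reference, the resume pattern I have in mind looks roughly like the
sketch below. This is only a sketch against nginx's internal API (it will
not compile outside an nginx module build), and all my_* names are
hypothetical placeholders; ngx_http_output_filter, ngx_add_timer,
r->write_event_handler and ngx_http_request_empty_handler are the real
nginx internals involved:

```c
/* Sketch only: assumes an nginx module build; all my_* names are
 * hypothetical placeholders for module-specific code. */

static void my_resume_write_handler(ngx_http_request_t *r);

static void
my_send_frame(ngx_http_request_t *r, ngx_event_t *timer)
{
    ngx_int_t  rc;

    rc = ngx_http_output_filter(r, my_next_frame_chain(r)); /* hypothetical */

    if (rc == NGX_AGAIN) {
        /* Socket buffer is full. nginx's core writer keeps flushing the
         * buffered chain; instead of re-arming the timer now, take over
         * the write event handler so we are called back once it drains. */
        r->write_event_handler = my_resume_write_handler;
        return;
    }

    ngx_add_timer(timer, 10);   /* normal path: re-arm the 10ms timer */
}

static void
my_resume_write_handler(ngx_http_request_t *r)
{
    /* Called when the connection becomes writable again. Flush what is
     * still pending; once everything is sent, restore the timer cycle. */
    if (ngx_http_output_filter(r, NULL) == NGX_AGAIN) {
        return;                 /* still draining, wait for the next event */
    }

    r->write_event_handler = ngx_http_request_empty_handler;
    ngx_add_timer(my_timer_for(r), 10);     /* hypothetical accessor */
}
```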
I'm trying to understand how the shared memory pool works inside nginx.
To do that, I made a very small module which creates a shared memory zone
of 2097152 bytes,
then allocates and frees blocks of memory, starting from 0 and increasing
by 1kb until the allocation fails.
The parts that seemed strange to me were:
- the maximum block I could allocate was 128000 bytes
- each time the allocation failed, I started again from 0, but the maximum
allocatable block changed with the following profile
Is this the expected behavior?
Can anyone help me by explaining how shared memory works?
I have another module which makes intensive use of shared memory, and
understanding this could help me improve it and solve some "no memory"
messages.
I have attached the code.
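The probing loop I describe is essentially the following sketch (nginx
internals only, not compilable standalone; "zone" is assumed to be an
already-initialized ngx_shm_zone_t *). One relevant detail: for requests
larger than half a page, ngx_slab_alloc needs that many contiguous free
pages, so as the zone fragments, the largest block that still succeeds can
shrink:

```c
/* Sketch only: assumes an nginx module build where "zone" is an
 * ngx_shm_zone_t * already set up by the cycle. */
ngx_slab_pool_t  *shpool;
u_char           *p;
size_t            size;

shpool = (ngx_slab_pool_t *) zone->shm.addr;

for (size = 1024; ; size += 1024) {
    ngx_shmtx_lock(&shpool->mutex);
    p = ngx_slab_alloc_locked(shpool, size);
    ngx_shmtx_unlock(&shpool->mutex);

    if (p == NULL) {
        /* Allocation failed: above ngx_pagesize / 2 the slab allocator
         * allocates whole contiguous pages, so fragmentation and the
         * zone's own page-management overhead cap the maximum block
         * well below the zone size. */
        break;
    }

    ngx_shmtx_lock(&shpool->mutex);
    ngx_slab_free_locked(shpool, p);
    ngx_shmtx_unlock(&shpool->mutex);
}
```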
Here is another round of SO_REUSEPORT support. The plan has changed a
little bit to allow smooth configuration reloading and binary upgrading.
Here is what happens when so_reuseport is enabled (this does not affect
the single process model):
- The master creates the listen sockets w/ SO_REUSEPORT, but does not
configure them
- The first worker process inherits the listen sockets created by the
master and configures them
- After the master has forked the first worker process, all listen sockets
in the master are closed
- The rest of the workers create their own listen sockets w/ SO_REUSEPORT
- During a binary upgrade, listen sockets are no longer passed through
environment variables, since the new master will create its own listen
sockets. Well, the old master actually does not have any listen
sockets open :).
The idea behind this plan is that at any given time there is always at
least one listen socket left, which can inherit the syncaches and pending
sockets of the to-be-closed listen sockets. The inheritance itself is
handled by the kernel; I implemented this inheritance for DragonFlyBSD.
I am not tracking Linux's code, but I think the Linux side will
eventually get (or has already got) the proper fix.
The patch itself:
Configuration reloading and binary upgrading are not interfered with, as
they were w/ the first 2 patches.
Binary-upgrade reverting method 1 ("Send the HUP signal to the old
master process. ...") is not interfered with, as it was w/ the first 2
patches. There could still be some glitches (though not as bad as w/
the first 2 patches) if binary-upgrade reverting method 2 ("Send the
TERM signal to the new master process. ...") is used. I think we
probably just need to mention that in the documentation.
Here are patches for OCSP stapling support. Testing and feedback are
welcome.

ssl_trusted_certificate
    Specifies a file with CA certificates in the PEM format used for
    certificate verification. In contrast to ssl_client_certificate, the
    DNs of these certificates aren't sent to the client in a
    CertificateRequest.

ssl_stapling
    Activates OCSP stapling.

ssl_stapling_file
    Uses a predefined OCSP response for stapling instead of querying the
    responder. Assumes an OCSP response in DER format, as produced by
    "openssl ocsp".

ssl_stapling_responder
    Uses the specified OCSP responder instead of the one found in the AIA
    certificate extension.
listen 443 ssl;
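A fuller example server block using these directives might look like the
following sketch (all file paths, the server name, and the responder URL
are placeholders, not values from the patch):

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate         cert.pem;
    ssl_certificate_key     cert.key;

    ssl_stapling            on;
    # CA chain used to verify the stapled OCSP response:
    ssl_trusted_certificate ca-chain.pem;

    # Optional overrides:
    # ssl_stapling_file      stapling.der;  # pre-made "openssl ocsp" response
    # ssl_stapling_responder http://ocsp.example.com/;
}
```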
- Unless an externally set OCSP response is used (via the "ssl_stapling_file"
directive), a stapled response won't be sent in the first connection. This
is due to the fact that OCSP responders are currently queried by nginx
once it receives a connection with the certificate_status extension in the
ClientHello, and due to limitations in the OpenSSL API (certificate status
callback is
- Cached OCSP responses are currently stored in local process memory (thus
each worker process will query OCSP responders independently). This
shouldn't be a problem, as the typical number of worker processes is low,
usually set to match the number of CPUs.
- Various timeouts are hardcoded (connect/read/write timeouts are 60s; a
response is considered valid for 1h after loading). Adding configuration
directives to control these would be trivial, but it may be a better idea
to actually omit them for simplicity.
- Only "http://" OCSP responders are recognized.
Patch can be found here:
Thanks to Comodo, DigiCert and GlobalSign for sponsoring this work.
I'm thinking about the design of a patch for adding a distributed SSL
session cache, and I have a question:
is it possible and OK to create a keepalive upstream to some storage
(memcached/redis/etc), then use it from