Hi all,
The Wikimedia Foundation has been running nginx-1.9.3 patched for
multi-certificate support on all production TLS traffic for a few
weeks now without incident, covering all inbound requests to Wikipedia
and the Foundation's other associated projects.
We initially used the older March variant of Filipe's patches (
http://mailman.nginx.org/pipermail/nginx-devel/2015-March/006734.html
), and last week we switched to using the April 27 variant (
http://mailman.nginx.org/pipermail/nginx-devel/2015-April/006863.html
), which is the most recent public variant I'm aware of.
These were in turn based on kyprizel's patch (
http://mailman.nginx.org/pipermail/nginx-devel/2015-March/006668.html
), which was based on Rob's patch from nearly two years ago (
http://mailman.nginx.org/pipermail/nginx-devel/2013-October/004376.html
). It has a long and colorful history at this point :)
We've forward-ported Filipe's Apr 27 variant onto Debian's 1.9.3-1
package. Most of the porting was trivial (offsets / whitespace /
etc). There were a couple of slightly more substantial issues around
the newer OCSP Stapling valid-timestamp checking, and the porting of
the general multi-cert work to the newer stream modules. The
ported/updated variant of the patches we're running is available here
in our repo:
https://github.com/wikimedia/operations-software-nginx/blob/wmf-1.9.3-1/deb…
Our configuration uses a pair of otherwise-identical RSA and ECDSA
keys and an external OCSP ssl_stapling_file (certs are from
GlobalSign, chain/OCSP info is identical in the pair). Our typical
relevant config fragment in the server section looks like this:
------------
ssl_certificate /etc/ssl/localcerts/ecc-uni.wikimedia.org.chained.crt;
ssl_certificate_key /etc/ssl/private/ecc-uni.wikimedia.org.key;
ssl_certificate /etc/ssl/localcerts/uni.wikimedia.org.chained.crt;
ssl_certificate_key /etc/ssl/private/uni.wikimedia.org.key;
ssl_stapling on;
ssl_stapling_file /var/cache/ocsp/unified.ocsp;
-------------
Obviously, we'd rather get this work (or something similar) upstreamed
so that we don't have to maintain local patches for this indefinitely,
and so that everyone else can use it easily too. I'm assuming the
reason it wasn't merged in the past is there may be other issues
blocking the merge that just weren't relevant to our particular
configuration, or are just matters of cleanliness or implementation
detail.
I'd be happy to work with anyone on resolving those issues and
getting this patchset into a mergeable state. Does anyone know what
the outstanding issues were/are? Some of the past list traffic on
this is a bit fragmented.
Thanks,
-- Brandon
Hello!
I currently use nginx 1.8 (stable). Is there an expected timeline for
HTTP/2 to become available in an nginx stable release, or for a
backport of HTTP/2 to 1.8.x?
Thanks and Regards
+Fasih
Hi.
* It looks like strings are supposed to end with a '\0' character to
be compatible with C strings, so ngx_pstrdup() should allocate and
copy len + 1 bytes, not just len.
* ngx_copy() returns different values under different preprocessor
conditions.
PS. I have no idea how trac.nginx.org works. I tried to file a
ticket, but it just got lost.
Hi,
From reading the code and the docs, I have gotten the impression that
limit_rate (and limit_rate_after) is per ngx_connection, which (I
think) means that it is per HTTP request and not per socket. Am I
right in this conclusion, or is the limit actually per socket/TCP
connection?
What we are observing is that the configured limit only kicks in for
files larger than limit_rate_after when the download is done in a
single GET request, but not when it is done in chunks using byte-range
requests (that is, many GET requests for the same file). So clients
can easily avoid the limit by downloading the file chunk by chunk
rather than in one request.
If our conclusion is right - that the limit is per HTTP request and
not per socket, so that a chunked download would not be limited - does
anyone have a suggestion for how to introduce a limit at the socket
level as well? I don't mind hacking away at the code, but perhaps
someone out there has already looked into this?
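For what it's worth, one partial workaround (a sketch only - the zone
name, sizes, and rates below are made-up examples, and whether
limit_req fits depends on your traffic) is to throttle request
frequency per client address with the stock limit_req module, so that
a client splitting a download into many ranged GETs is slowed on
request count even though limit_rate resets per request:

------------
# In the http block: per-IP request-frequency zone (names/rates are
# illustrative).
limit_req_zone $binary_remote_addr zone=dl_per_ip:10m rate=1r/s;

server {
    location /downloads/ {
        limit_rate_after 1m;     # per-request: full speed for first 1 MB
        limit_rate       256k;   # per-request: then cap at 256 KB/s
        limit_req zone=dl_per_ip burst=5;  # per-IP: cap GET frequency
    }
}
-------------

This doesn't make the byte-rate limit itself per-socket, but it bounds
how quickly one client can issue the chunked requests.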
/Stefan