Hello!
Here are patches for OCSP stapling support. Testing and
review appreciated.
New directives:
ssl_trusted_certificate /path/to/file;

    Specifies a file with CA certificates in PEM format used for
    certificate verification. In contrast to ssl_client_certificate,
    the DNs of these certificates are not sent to the client in a
    CertificateRequest.

ssl_stapling on|off;

    Activates OCSP stapling.

ssl_stapling_file /path/to/file;

    Uses a predefined OCSP response for stapling instead of querying
    the responder. The response is expected in DER format, as produced
    by "openssl ocsp".

ssl_stapling_responder URL;

    Uses the specified OCSP responder instead of the one found in the
    certificate's AIA (Authority Information Access) extension.
Example configuration:
server {
    listen 443 ssl;

    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    ssl_stapling on;
    ssl_trusted_certificate /path/to/ca.pem;

    resolver 8.8.8.8;
}
Known limitations:
- Unless an externally set OCSP response is used (via the "ssl_stapling_file"
  directive), a stapled response won't be sent on the first connection. This
  is because nginx currently queries OCSP responders only once it receives a
  connection with the certificate_status extension in the ClientHello, and
  because of limitations in the OpenSSL API (the certificate status callback
  is blocking).

- Cached OCSP responses are currently stored in local process memory (thus
  each worker process will query OCSP responders independently). This
  shouldn't be a problem, as the typical number of worker processes is low,
  usually set to match the number of CPUs.

- Various timeouts are hardcoded (connect/read/write timeouts are 60s, a
  response is considered valid for 1h after loading). Adding configuration
  directives to control these would be trivial, but it may be a better idea
  to omit them for simplicity.
- Only "http://" OCSP responders are recognized.
Patch can be found here:
http://nginx.org/patches/ocsp-stapling/
Thanks to Comodo, DigiCert and GlobalSign for sponsoring this work.
Maxim Dounin
Hello,
I have written a module to implement sFlow in nginx (nginx-sflow-module.googlecode.com). I'm simulating a 1-second timer-tick by assuming that the request handler will be called at least once per second. That's probably a safe assumption for any server that would care about sFlow monitoring, but I expect there's a better way...
I tried asking for a timer callback like this:
ngx_event_t *ev = ngx_pcalloc(pool, sizeof(ngx_event_t));
ev->handler = ngx_http_sflow_tick_event_handler;
ngx_add_timer(ev, 1000);
but the event handler was never called. It looks like I might have to hang this on a file descriptor somehow, but that's where I'm getting lost. Any pointers would be most appreciated.
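For concreteness, the kind of repeating tick I'm after would look roughly like this (an untested sketch; the names are placeholders):

#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>

static ngx_event_t  ngx_http_sflow_tick_event;

static void
ngx_http_sflow_tick_handler(ngx_event_t *ev)
{
    /* per-second sFlow housekeeping would go here */

    if (!ngx_exiting) {
        ngx_add_timer(ev, 1000);    /* re-arm for the next second */
    }
}

static ngx_int_t
ngx_http_sflow_init_process(ngx_cycle_t *cycle)
{
    ngx_memzero(&ngx_http_sflow_tick_event, sizeof(ngx_event_t));

    /* handler, log and data have to be set before ngx_add_timer() */
    ngx_http_sflow_tick_event.handler = ngx_http_sflow_tick_handler;
    ngx_http_sflow_tick_event.log = cycle->log;
    ngx_http_sflow_tick_event.data = NULL;

    ngx_add_timer(&ngx_http_sflow_tick_event, 1000);

    return NGX_OK;
}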
Neil
Hello!
According to the current implementation of ngx_http_upstream, there is
almost no way for 3rd-party output body filters and "post_subrequest"
handlers (in the subrequest context) to know whether any errors occurred
while ngx_http_upstream was processing the upstream response body after
the response header had been sent out (in both the buffered and
non-buffered modes).

For example, if

1. a read timeout happens in the middle of reading the upstream response
body,

2. or the upstream connection is closed prematurely in the same situation,

then ngx_http_upstream will just happily finalize the current request
with status code 0 (i.e., NGX_OK). This issue already affects at least
our ngx_srcache and ngx_lua modules (originally reported by Bryan Alger).
Attached is a patch that makes ngx_http_upstream set
r->headers_out.status to a new error status code to notify the outside
world when there is a problem. Comments will be highly appreciated, as
always :)
Thanks!
-agentzh
--- nginx-1.2.3/src/http/ngx_http_upstream.c 2012-08-06 10:34:08.000000000 -0700
+++ nginx-1.2.3-patched/src/http/ngx_http_upstream.c 2012-09-09 21:58:04.727761891 -0700
@@ -2383,7 +2383,7 @@
if (c->read->timedout) {
ngx_connection_error(c, NGX_ETIMEDOUT, "upstream timed out");
- ngx_http_upstream_finalize_request(r, u, 0);
+ ngx_http_upstream_finalize_request(r, u, NGX_HTTP_GATEWAY_TIME_OUT);
return;
}
@@ -2430,13 +2430,17 @@
if (u->busy_bufs == NULL) {
if (u->length == 0
- || upstream->read->eof
- || upstream->read->error)
+ || (upstream->read->eof && u->headers_in.content_length_n == -1))
{
ngx_http_upstream_finalize_request(r, u, 0);
return;
}
+ if (upstream->read->eof || upstream->read->error) {
+ ngx_http_upstream_finalize_request(r, u, NGX_HTTP_BAD_GATEWAY);
+ return;
+ }
+
b->pos = b->start;
b->last = b->start;
}
@@ -2710,7 +2714,16 @@
#if 0
ngx_http_busy_unlock(u->conf->busy_lock, &u->busy_lock);
#endif
- ngx_http_upstream_finalize_request(r, u, 0);
+
+ if (p->upstream_done
+ || (p->upstream_eof && u->headers_in.content_length_n == -1))
+ {
+ ngx_http_upstream_finalize_request(r, u, 0);
+
+ } else {
+ ngx_http_upstream_finalize_request(r, u, NGX_HTTP_BAD_GATEWAY);
+ }
+
return;
}
}
@@ -3073,6 +3086,13 @@
&& rc != NGX_HTTP_REQUEST_TIME_OUT
&& (rc == NGX_ERROR || rc >= NGX_HTTP_SPECIAL_RESPONSE))
{
+ if (rc == NGX_ERROR) {
+ r->headers_out.status = NGX_HTTP_INTERNAL_SERVER_ERROR;
+
+ } else {
+ r->headers_out.status = rc;
+ }
+
rc = 0;
}
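With the patch applied, a post_subrequest handler could then detect such upstream failures roughly as follows (an illustrative sketch only; the handler name is made up):

static ngx_int_t
my_post_subrequest_handler(ngx_http_request_t *r, void *data, ngx_int_t rc)
{
    /* with the patch, a truncated or timed-out upstream body surfaces as
     * 502/504 in r->headers_out.status instead of a silent NGX_OK */

    if (rc != NGX_OK || r->headers_out.status >= NGX_HTTP_BAD_REQUEST) {
        ngx_log_error(NGX_LOG_ERR, r->connection->log, 0,
                      "subrequest \"%V\" failed, status: %ui",
                      &r->uri, r->headers_out.status);
        return rc;
    }

    /* normal handling of a successful subrequest goes here */

    return NGX_OK;
}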
Hi,
It is not clear to me how to avoid blocking the nginx reactor loop when
writing an nginx module that has to perform long I/O operations before
returning a response to the client. Or is this handled internally by
nginx?
Any hints are appreciated.
Thanks,
Alex
Author: vbart
Date: 2013-02-27 17:41:34 +0000 (Wed, 27 Feb 2013)
New Revision: 5096
URL: http://trac.nginx.org/nginx/changeset/5096/nginx
Log:
SNI: added restriction on requesting host other than negotiated.
According to RFC 6066, a client is not supposed to request a different server
name at the application layer. Server implementations that rely on these
names being equal must validate that the client did not send a different name
in the HTTP request. Current versions of the Apache HTTP server always return
400 "Bad Request" in such cases.

There exist implementations, however (e.g., SPDY), that rely on being able to
request different host names in one connection. Given this, we only reject
requests with differing host names if verification of client certificates is
enabled in the corresponding server configuration.
An example of configuration that might not work as expected:
server {
    listen 443 ssl default;
    return 404;
}

server {
    listen 443 ssl;
    server_name example.org;
    ssl_client_certificate org.cert;
    ssl_verify_client on;
}

server {
    listen 443 ssl;
    server_name example.com;
    ssl_client_certificate com.cert;
    ssl_verify_client on;
}
Previously, a client was able to request example.com by presenting
a certificate for example.org, and vice versa.
Modified:
trunk/src/http/ngx_http_request.c
Modified: trunk/src/http/ngx_http_request.c
===================================================================
--- trunk/src/http/ngx_http_request.c 2013-02-27 17:38:54 UTC (rev 5095)
+++ trunk/src/http/ngx_http_request.c 2013-02-27 17:41:34 UTC (rev 5096)
@@ -1872,10 +1872,22 @@
#ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME
if (hc->ssl_servername) {
+ ngx_http_ssl_srv_conf_t *sscf;
+
if (rc == NGX_DECLINED) {
cscf = hc->addr_conf->default_server;
rc = NGX_OK;
}
+
+ sscf = ngx_http_get_module_srv_conf(cscf->ctx, ngx_http_ssl_module);
+
+ if (sscf->verify) {
+ ngx_log_error(NGX_LOG_INFO, r->connection->log, 0,
+ "client attempted to request the server name "
+ "different from that one was negotiated");
+ ngx_http_finalize_request(r, NGX_HTTP_BAD_REQUEST);
+ return NGX_ERROR;
+ }
}
#endif
Author: vbart
Date: 2013-02-27 17:38:54 +0000 (Wed, 27 Feb 2013)
New Revision: 5095
URL: http://trac.nginx.org/nginx/changeset/5095/nginx
Log:
SNI: reset to default server if requested host was not found.
Not only is this consistent with the case without SNI, but it also
prevents abuse of configurations that assume that the $host variable
is limited to one of the configured names for a server.
An example of potentially unsafe configuration:
server {
    listen 443 ssl default_server;
    ...
}

server {
    listen 443;
    server_name example.com;

    location / {
        proxy_pass http://$host;
    }
}
Note: it is possible to negotiate "example.com" via SNI and then to request
an arbitrary host name that does not exist in the configuration above.
Modified:
trunk/src/http/ngx_http_request.c
Modified: trunk/src/http/ngx_http_request.c
===================================================================
--- trunk/src/http/ngx_http_request.c 2013-02-27 17:33:59 UTC (rev 5094)
+++ trunk/src/http/ngx_http_request.c 2013-02-27 17:38:54 UTC (rev 5095)
@@ -1869,6 +1869,17 @@
return NGX_ERROR;
}
+#ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME
+
+ if (hc->ssl_servername) {
+ if (rc == NGX_DECLINED) {
+ cscf = hc->addr_conf->default_server;
+ rc = NGX_OK;
+ }
+ }
+
+#endif
+
if (rc == NGX_DECLINED) {
return NGX_OK;
}
Author: vbart
Date: 2013-02-27 17:27:15 +0000 (Wed, 27 Feb 2013)
New Revision: 5093
URL: http://trac.nginx.org/nginx/changeset/5093/nginx
Log:
Apply server configuration as soon as host is known.
Previously, this was done only after the whole request header
was parsed, and if an error occurred earlier then the request
was processed in the default server (or server chosen by SNI),
while r->headers_in.server might be set to the value from the
Host: header or host from request line.
r->headers_in.server is in turn used for $host variable and
in HTTP redirects if "server_name_in_redirect" is disabled.
Without the change, configurations that rely on this during
error handling are potentially unsafe if SNI is used.
This change also allows to use server specific settings of
"underscores_in_headers", "ignore_invalid_headers", and
"large_client_header_buffers" directives for HTTP requests
and HTTPS requests without SNI.
Modified:
trunk/src/http/ngx_http_request.c
Modified: trunk/src/http/ngx_http_request.c
===================================================================
--- trunk/src/http/ngx_http_request.c 2013-02-27 17:21:21 UTC (rev 5092)
+++ trunk/src/http/ngx_http_request.c 2013-02-27 17:27:15 UTC (rev 5093)
@@ -919,13 +919,18 @@
return;
}
+ if (ngx_http_set_virtual_server(r, &host) == NGX_ERROR) {
+ return;
+ }
+
r->headers_in.server = host;
}
if (r->http_version < NGX_HTTP_VERSION_10) {
- if (ngx_http_set_virtual_server(r, &r->headers_in.server)
- == NGX_ERROR)
+ if (r->headers_in.server.len == 0
+ && ngx_http_set_virtual_server(r, &r->headers_in.server)
+ == NGX_ERROR)
{
return;
}
@@ -1014,7 +1019,6 @@
}
cmcf = ngx_http_get_module_main_conf(r, ngx_http_core_module);
- cscf = ngx_http_get_module_srv_conf(r, ngx_http_core_module);
rc = NGX_AGAIN;
@@ -1068,6 +1072,9 @@
}
}
+ /* the host header could change the server configuration context */
+ cscf = ngx_http_get_module_srv_conf(r, ngx_http_core_module);
+
rc = ngx_http_parse_header_line(r, r->header_in,
cscf->underscores_in_headers);
@@ -1444,6 +1451,10 @@
return NGX_OK;
}
+ if (ngx_http_set_virtual_server(r, &host) == NGX_ERROR) {
+ return NGX_ERROR;
+ }
+
r->headers_in.server = host;
return NGX_OK;
@@ -1570,7 +1581,10 @@
static ngx_int_t
ngx_http_process_request_header(ngx_http_request_t *r)
{
- if (ngx_http_set_virtual_server(r, &r->headers_in.server) == NGX_ERROR) {
+ if (r->headers_in.server.len == 0
+ && ngx_http_set_virtual_server(r, &r->headers_in.server)
+ == NGX_ERROR)
+ {
return NGX_ERROR;
}